Test Report: Docker_Linux_crio 22049

b350bc6d66813cad84bbff620e1b65ef38f64c38:2025-12-06:42657

Test fail (26/415)

TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable volcano --alsologtostderr -v=1: exit status 11 (253.287857ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:16.038101   18855 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:16.038399   18855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:16.038409   18855 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:16.038414   18855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:16.038671   18855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:16.038969   18855 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:16.039360   18855 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:16.039382   18855 addons.go:622] checking whether the cluster is paused
	I1206 08:30:16.039482   18855 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:16.039499   18855 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:16.039886   18855 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:16.059113   18855 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:16.059169   18855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:16.078358   18855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:16.170933   18855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:16.171046   18855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:16.200060   18855 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:16.200085   18855 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:16.200091   18855 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:16.200096   18855 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:16.200101   18855 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:16.200106   18855 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:16.200110   18855 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:16.200114   18855 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:16.200118   18855 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:16.200126   18855 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:16.200131   18855 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:16.200135   18855 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:16.200140   18855 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:16.200145   18855 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:16.200156   18855 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:16.200173   18855 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:16.200181   18855 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:16.200188   18855 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:16.200193   18855 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:16.200197   18855 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:16.200203   18855 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:16.200211   18855 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:16.200216   18855 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:16.200224   18855 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:16.200230   18855 cri.go:89] found id: ""
	I1206 08:30:16.200290   18855 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:16.214786   18855 out.go:203] 
	W1206 08:30:16.216224   18855 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:16.216246   18855 out.go:285] * 
	* 
	W1206 08:30:16.219399   18855 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:16.220849   18855 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
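Note: the disable command dies in minikube's "is the cluster paused" check, not in the Volcano teardown itself. After listing the kube-system containers it runs `sudo runc list -f json`, which exits 1 on this CRI-O node because /run/runc does not exist; the same error terminates the Registry and RegistryCreds disable calls below. A minimal manual reproduction of that check (a hedged sketch only; the profile name and commands are taken from the log above, with crictl as the CRI-O-side equivalent query):

	# hypothetical debugging step: re-run the failing pause check by hand on the node
	minikube -p addons-765040 ssh "sudo runc list -f json"
	#   expected to fail as in the log: "open /run/runc: no such file or directory"
	minikube -p addons-765040 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	#   should list the same kube-system container IDs the log shows, i.e. the runtime itself is responding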

                                                
                                    
TestAddons/parallel/Registry (14.63s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.567068ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-cc7hl" [aeab1a5f-6caa-4183-9dc0-b1c92e9bf3f3] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002763701s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-62qx6" [a5766114-0431-447e-b362-4c2e9c2ce565] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003368738s
addons_test.go:392: (dbg) Run:  kubectl --context addons-765040 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-765040 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-765040 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.057045627s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable registry --alsologtostderr -v=1: exit status 11 (310.20138ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:39.432969   20599 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:39.433339   20599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:39.433347   20599 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:39.433354   20599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:39.433673   20599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:39.434037   20599 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:39.434490   20599 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:39.434530   20599 addons.go:622] checking whether the cluster is paused
	I1206 08:30:39.434680   20599 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:39.434705   20599 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:39.435326   20599 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:39.459205   20599 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:39.459273   20599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:39.485148   20599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:39.590969   20599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:39.591075   20599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:39.631452   20599 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:39.631490   20599 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:39.631497   20599 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:39.631502   20599 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:39.631506   20599 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:39.631512   20599 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:39.631517   20599 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:39.631521   20599 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:39.631525   20599 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:39.631579   20599 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:39.631590   20599 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:39.631595   20599 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:39.631603   20599 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:39.631608   20599 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:39.631616   20599 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:39.631627   20599 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:39.631633   20599 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:39.631639   20599 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:39.631643   20599 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:39.631648   20599 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:39.631652   20599 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:39.631665   20599 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:39.631670   20599 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:39.631674   20599 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:39.631678   20599 cri.go:89] found id: ""
	I1206 08:30:39.631735   20599 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:39.650348   20599 out.go:203] 
	W1206 08:30:39.651643   20599 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:39.651672   20599 out.go:285] * 
	* 
	W1206 08:30:39.656798   20599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:39.658501   20599 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.63s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.972915ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-765040
addons_test.go:332: (dbg) Run:  kubectl --context addons-765040 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (251.291113ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:31.526199   19498 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:31.526507   19498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:31.526523   19498 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:31.526529   19498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:31.526740   19498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:31.527013   19498 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:31.527415   19498 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:31.527440   19498 addons.go:622] checking whether the cluster is paused
	I1206 08:30:31.527528   19498 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:31.527544   19498 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:31.527926   19498 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:31.546366   19498 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:31.546431   19498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:31.567407   19498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:31.660567   19498 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:31.660653   19498 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:31.692480   19498 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:31.692504   19498 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:31.692510   19498 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:31.692515   19498 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:31.692520   19498 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:31.692526   19498 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:31.692530   19498 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:31.692535   19498 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:31.692539   19498 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:31.692560   19498 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:31.692568   19498 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:31.692573   19498 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:31.692581   19498 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:31.692586   19498 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:31.692592   19498 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:31.692607   19498 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:31.692617   19498 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:31.692625   19498 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:31.692670   19498 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:31.692679   19498 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:31.692684   19498 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:31.692691   19498 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:31.692694   19498 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:31.692700   19498 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:31.692705   19498 cri.go:89] found id: ""
	I1206 08:30:31.692754   19498 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:31.710147   19498 out.go:203] 
	W1206 08:30:31.711587   19498 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:31.711608   19498 out.go:285] * 
	* 
	W1206 08:30:31.714490   19498 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:31.715961   19498 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

                                                
                                    
TestAddons/parallel/Ingress (148.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-765040 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-765040 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-765040 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [bc01b42b-526c-4fb6-bf48-d0aee4699fd5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [bc01b42b-526c-4fb6-bf48-d0aee4699fd5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002651157s
I1206 08:30:40.737058    9158 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.75718191s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-765040 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
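The curl probe above is the actual failure: the in-node request to http://127.0.0.1/ with the nginx.example.com Host header never got a response within the test's window (exit status 28 is curl's "operation timed out" code). A hedged sketch of re-running that probe by hand, using only commands already present in this log plus a shorter explicit timeout:

	# hypothetical follow-up: confirm the controller is up, then retry the exact request with a 10s cap
	kubectl --context addons-765040 -n ingress-nginx get pods
	minikube -p addons-765040 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"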
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-765040
helpers_test.go:243: (dbg) docker inspect addons-765040:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f",
	        "Created": "2025-12-06T08:28:37.206934469Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11582,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:28:37.246417696Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f/hosts",
	        "LogPath": "/var/lib/docker/containers/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f-json.log",
	        "Name": "/addons-765040",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-765040:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-765040",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f",
	                "LowerDir": "/var/lib/docker/overlay2/9cec2f81adbef1e2e38e29523745d1dc4e6d5c4ef993f319e48dd7a30e241dfd-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9cec2f81adbef1e2e38e29523745d1dc4e6d5c4ef993f319e48dd7a30e241dfd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9cec2f81adbef1e2e38e29523745d1dc4e6d5c4ef993f319e48dd7a30e241dfd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9cec2f81adbef1e2e38e29523745d1dc4e6d5c4ef993f319e48dd7a30e241dfd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-765040",
	                "Source": "/var/lib/docker/volumes/addons-765040/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-765040",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-765040",
	                "name.minikube.sigs.k8s.io": "addons-765040",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "149907238c9b06fe904b5ec6d924983f773136a3bcf194f7f91647f015ecb15f",
	            "SandboxKey": "/var/run/docker/netns/149907238c9b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-765040": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9fc234aa0001004b99f75640e5a6f610b5693a87a6c2ea28dadc06a580b327e0",
	                    "EndpointID": "cc27eb7806ba1e0b0f14efd409f935e0ccfec14d25cf37e14cabc84e9d21dc92",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "e2:83:ea:3c:05:06",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-765040",
	                        "e6ebd39802c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-765040 -n addons-765040
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-765040 logs -n 25: (1.110402469s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-791651 --alsologtostderr --binary-mirror http://127.0.0.1:42485 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-791651 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ -p binary-mirror-791651                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-791651 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ addons  │ enable dashboard -p addons-765040                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-765040                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ start   │ -p addons-765040 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:30 UTC │
	│ addons  │ addons-765040 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-765040                                                                                                                                                                                                                                                                                                                                                                                           │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │ 06 Dec 25 08:30 UTC │
	│ addons  │ addons-765040 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ ip      │ addons-765040 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │ 06 Dec 25 08:30 UTC │
	│ addons  │ addons-765040 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ ssh     │ addons-765040 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ ssh     │ addons-765040 ssh cat /opt/local-path-provisioner/pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │ 06 Dec 25 08:30 UTC │
	│ addons  │ addons-765040 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ enable headlamp -p addons-765040 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-765040 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │                     │
	│ ip      │ addons-765040 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-765040        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:13
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:28:13.517551   10917 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:13.517870   10917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:13.517884   10917 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:13.517888   10917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:13.518119   10917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:28:13.518632   10917 out.go:368] Setting JSON to false
	I1206 08:28:13.519436   10917 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":645,"bootTime":1765009049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:28:13.519491   10917 start.go:143] virtualization: kvm guest
	I1206 08:28:13.521363   10917 out.go:179] * [addons-765040] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:28:13.522876   10917 notify.go:221] Checking for updates...
	I1206 08:28:13.522890   10917 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:28:13.524088   10917 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:13.525293   10917 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:28:13.526449   10917 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 08:28:13.527747   10917 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:28:13.529002   10917 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:28:13.530460   10917 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:13.552868   10917 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 08:28:13.552951   10917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:13.605014   10917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 08:28:13.596028874 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:28:13.605111   10917 docker.go:319] overlay module found
	I1206 08:28:13.606844   10917 out.go:179] * Using the docker driver based on user configuration
	I1206 08:28:13.608017   10917 start.go:309] selected driver: docker
	I1206 08:28:13.608034   10917 start.go:927] validating driver "docker" against <nil>
	I1206 08:28:13.608045   10917 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:28:13.608589   10917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:13.659285   10917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 08:28:13.65040943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:28:13.659460   10917 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:13.659756   10917 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:28:13.661710   10917 out.go:179] * Using Docker driver with root privileges
	I1206 08:28:13.662981   10917 cni.go:84] Creating CNI manager for ""
	I1206 08:28:13.663069   10917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 08:28:13.663085   10917 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 08:28:13.663167   10917 start.go:353] cluster config:
	{Name:addons-765040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:28:13.664560   10917 out.go:179] * Starting "addons-765040" primary control-plane node in "addons-765040" cluster
	I1206 08:28:13.665892   10917 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 08:28:13.667139   10917 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:28:13.668333   10917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:28:13.668372   10917 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 08:28:13.668378   10917 cache.go:65] Caching tarball of preloaded images
	I1206 08:28:13.668432   10917 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:28:13.668451   10917 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 08:28:13.668475   10917 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 08:28:13.668799   10917 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/config.json ...
	I1206 08:28:13.668820   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/config.json: {Name:mkc4940cc63cbd4e42707a0b9fa12c640aed83ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:13.685770   10917 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 08:28:13.685897   10917 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 08:28:13.685917   10917 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 08:28:13.685923   10917 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 08:28:13.685933   10917 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 08:28:13.685943   10917 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1206 08:28:26.872277   10917 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1206 08:28:26.872331   10917 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:28:26.872376   10917 start.go:360] acquireMachinesLock for addons-765040: {Name:mk815f37680f889a77215d594e93dfa4e4ffc3d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:28:26.872483   10917 start.go:364] duration metric: took 84.449µs to acquireMachinesLock for "addons-765040"
	I1206 08:28:26.872513   10917 start.go:93] Provisioning new machine with config: &{Name:addons-765040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 08:28:26.872585   10917 start.go:125] createHost starting for "" (driver="docker")
	I1206 08:28:26.875082   10917 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 08:28:26.875303   10917 start.go:159] libmachine.API.Create for "addons-765040" (driver="docker")
	I1206 08:28:26.875336   10917 client.go:173] LocalClient.Create starting
	I1206 08:28:26.875447   10917 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 08:28:26.946406   10917 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 08:28:27.114294   10917 cli_runner.go:164] Run: docker network inspect addons-765040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 08:28:27.132200   10917 cli_runner.go:211] docker network inspect addons-765040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 08:28:27.132264   10917 network_create.go:284] running [docker network inspect addons-765040] to gather additional debugging logs...
	I1206 08:28:27.132277   10917 cli_runner.go:164] Run: docker network inspect addons-765040
	W1206 08:28:27.147367   10917 cli_runner.go:211] docker network inspect addons-765040 returned with exit code 1
	I1206 08:28:27.147395   10917 network_create.go:287] error running [docker network inspect addons-765040]: docker network inspect addons-765040: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-765040 not found
	I1206 08:28:27.147421   10917 network_create.go:289] output of [docker network inspect addons-765040]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-765040 not found
	
	** /stderr **
	I1206 08:28:27.147512   10917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:28:27.164356   10917 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c468b0}
	I1206 08:28:27.164386   10917 network_create.go:124] attempt to create docker network addons-765040 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 08:28:27.164435   10917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-765040 addons-765040
	I1206 08:28:27.209037   10917 network_create.go:108] docker network addons-765040 192.168.49.0/24 created
	I1206 08:28:27.209088   10917 kic.go:121] calculated static IP "192.168.49.2" for the "addons-765040" container
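	(Note: the "static" node IP follows directly from the subnet chosen above: the gateway takes the first host address and the single node the second. A minimal sketch of that convention, not minikube's actual kic code:)

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Assumption for illustration: gateway = first host address in the
	// selected subnet, node = second, matching the values in this log.
	_, subnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := subnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)
	node := net.IPv4(base[0], base[1], base[2], base[3]+2)
	fmt.Println("gateway:", gateway) // 192.168.49.1
	fmt.Println("node:", node)       // 192.168.49.2
}
```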
	I1206 08:28:27.209152   10917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 08:28:27.225039   10917 cli_runner.go:164] Run: docker volume create addons-765040 --label name.minikube.sigs.k8s.io=addons-765040 --label created_by.minikube.sigs.k8s.io=true
	I1206 08:28:27.241947   10917 oci.go:103] Successfully created a docker volume addons-765040
	I1206 08:28:27.242043   10917 cli_runner.go:164] Run: docker run --rm --name addons-765040-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-765040 --entrypoint /usr/bin/test -v addons-765040:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 08:28:33.349737   10917 cli_runner.go:217] Completed: docker run --rm --name addons-765040-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-765040 --entrypoint /usr/bin/test -v addons-765040:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (6.107650132s)
	I1206 08:28:33.349763   10917 oci.go:107] Successfully prepared a docker volume addons-765040
	I1206 08:28:33.349817   10917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:28:33.349828   10917 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 08:28:33.349876   10917 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-765040:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 08:28:37.139539   10917 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-765040:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.789608888s)
	I1206 08:28:37.139585   10917 kic.go:203] duration metric: took 3.78975333s to extract preloaded images to volume ...
	W1206 08:28:37.139675   10917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 08:28:37.139717   10917 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 08:28:37.139755   10917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 08:28:37.191680   10917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-765040 --name addons-765040 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-765040 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-765040 --network addons-765040 --ip 192.168.49.2 --volume addons-765040:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 08:28:37.483915   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Running}}
	I1206 08:28:37.502886   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:37.521541   10917 cli_runner.go:164] Run: docker exec addons-765040 stat /var/lib/dpkg/alternatives/iptables
	I1206 08:28:37.573855   10917 oci.go:144] the created container "addons-765040" has a running status.
	I1206 08:28:37.573883   10917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa...
	I1206 08:28:37.666669   10917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 08:28:37.691472   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:37.714332   10917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 08:28:37.714359   10917 kic_runner.go:114] Args: [docker exec --privileged addons-765040 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 08:28:37.757534   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:37.782718   10917 machine.go:94] provisionDockerMachine start ...
	I1206 08:28:37.782849   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:37.805148   10917 main.go:143] libmachine: Using SSH client type: native
	I1206 08:28:37.805980   10917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1206 08:28:37.806020   10917 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:28:37.940896   10917 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-765040
	
	I1206 08:28:37.940926   10917 ubuntu.go:182] provisioning hostname "addons-765040"
	I1206 08:28:37.941003   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:37.961076   10917 main.go:143] libmachine: Using SSH client type: native
	I1206 08:28:37.961400   10917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1206 08:28:37.961425   10917 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-765040 && echo "addons-765040" | sudo tee /etc/hostname
	I1206 08:28:38.098462   10917 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-765040
	
	I1206 08:28:38.098538   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.118617   10917 main.go:143] libmachine: Using SSH client type: native
	I1206 08:28:38.118855   10917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1206 08:28:38.118881   10917 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-765040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-765040/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-765040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:28:38.245402   10917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:28:38.245433   10917 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 08:28:38.245451   10917 ubuntu.go:190] setting up certificates
	I1206 08:28:38.245459   10917 provision.go:84] configureAuth start
	I1206 08:28:38.245503   10917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-765040
	I1206 08:28:38.262175   10917 provision.go:143] copyHostCerts
	I1206 08:28:38.262242   10917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 08:28:38.262368   10917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 08:28:38.262444   10917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 08:28:38.262546   10917 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.addons-765040 san=[127.0.0.1 192.168.49.2 addons-765040 localhost minikube]
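	(Note: the server cert above is generated with the listed SANs and signed by the minikube CA. A minimal, self-signed sketch of producing a SAN-bearing certificate with Go's crypto/x509, for illustration only; it is not minikube's provisioning code:)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-765040"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs reported in the log line above.
		DNSNames:    []string{"addons-765040", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed here; the real flow signs with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```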
	I1206 08:28:38.279887   10917 provision.go:177] copyRemoteCerts
	I1206 08:28:38.279929   10917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:28:38.279957   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.296426   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:38.389028   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 08:28:38.407537   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 08:28:38.424272   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 08:28:38.440577   10917 provision.go:87] duration metric: took 195.106008ms to configureAuth
	I1206 08:28:38.440605   10917 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:28:38.440811   10917 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:28:38.440913   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.459664   10917 main.go:143] libmachine: Using SSH client type: native
	I1206 08:28:38.459885   10917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1206 08:28:38.459905   10917 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 08:28:38.720520   10917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 08:28:38.720548   10917 machine.go:97] duration metric: took 937.801035ms to provisionDockerMachine
	I1206 08:28:38.720559   10917 client.go:176] duration metric: took 11.845205717s to LocalClient.Create
	I1206 08:28:38.720579   10917 start.go:167] duration metric: took 11.845275252s to libmachine.API.Create "addons-765040"
	I1206 08:28:38.720589   10917 start.go:293] postStartSetup for "addons-765040" (driver="docker")
	I1206 08:28:38.720602   10917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:28:38.720664   10917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:28:38.720720   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.738076   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
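	(Note: the 127.0.0.1:32768 endpoint used by these ssh clients is the host port Docker published for the container's 22/tcp. A small sketch of the same docker-inspect lookup that recurs throughout this log; the container name is the one from this run:)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log uses to read the host port mapped to 22/tcp.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-765040").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 32768 in this run
}
```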
	I1206 08:28:38.831377   10917 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:28:38.834534   10917 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:28:38.834560   10917 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:28:38.834574   10917 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 08:28:38.834628   10917 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 08:28:38.834653   10917 start.go:296] duration metric: took 114.057967ms for postStartSetup
	I1206 08:28:38.834949   10917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-765040
	I1206 08:28:38.851945   10917 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/config.json ...
	I1206 08:28:38.852223   10917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:28:38.852274   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.871235   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:38.959882   10917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:28:38.964209   10917 start.go:128] duration metric: took 12.091610543s to createHost
	I1206 08:28:38.964234   10917 start.go:83] releasing machines lock for "addons-765040", held for 12.091737561s
	I1206 08:28:38.964293   10917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-765040
	I1206 08:28:38.981586   10917 ssh_runner.go:195] Run: cat /version.json
	I1206 08:28:38.981669   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.981766   10917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:28:38.981838   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:39.001373   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:39.001393   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:39.145883   10917 ssh_runner.go:195] Run: systemctl --version
	I1206 08:28:39.152038   10917 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 08:28:39.184647   10917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 08:28:39.189152   10917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:28:39.189220   10917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:28:39.213372   10917 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 08:28:39.213398   10917 start.go:496] detecting cgroup driver to use...
	I1206 08:28:39.213429   10917 detect.go:190] detected "systemd" cgroup driver on host os
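	(Note: the "systemd" driver detected here is consistent with CgroupDriver:systemd in the docker info dump earlier. One way to confirm it from the host, not necessarily how detect.go does it:)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// docker info exposes the active cgroup driver via its Go template output.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "systemd" on this agent
}
```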
	I1206 08:28:39.213475   10917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 08:28:39.228363   10917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 08:28:39.239867   10917 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:28:39.239923   10917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:28:39.255437   10917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:28:39.271764   10917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:28:39.353206   10917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:28:39.433938   10917 docker.go:234] disabling docker service ...
	I1206 08:28:39.434013   10917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:28:39.451394   10917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:28:39.463652   10917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:28:39.547471   10917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:28:39.623956   10917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:28:39.636124   10917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:28:39.649762   10917 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 08:28:39.649817   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.660041   10917 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 08:28:39.660102   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.668996   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.677331   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.686027   10917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:28:39.693878   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.702442   10917 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.716354   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.725208   10917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:28:39.732625   10917 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 08:28:39.732686   10917 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 08:28:39.744723   10917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:28:39.752658   10917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:28:39.828807   10917 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 08:28:39.965608   10917 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 08:28:39.965669   10917 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 08:28:39.969462   10917 start.go:564] Will wait 60s for crictl version
	I1206 08:28:39.969517   10917 ssh_runner.go:195] Run: which crictl
	I1206 08:28:39.972887   10917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:28:39.996063   10917 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 08:28:39.996173   10917 ssh_runner.go:195] Run: crio --version
	I1206 08:28:40.023126   10917 ssh_runner.go:195] Run: crio --version
	I1206 08:28:40.051214   10917 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 08:28:40.052722   10917 cli_runner.go:164] Run: docker network inspect addons-765040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:28:40.069994   10917 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:28:40.074025   10917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
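	(Note: the one-liner above rewrites /etc/hosts idempotently: drop any existing host.minikube.internal entry, then append the gateway IP. A rough Go equivalent for illustration; the real flow runs the shell command over SSH as root and copies the temp file into place with sudo:)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Skip any stale entry so the file never accumulates duplicates.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	fmt.Print(strings.Join(kept, "\n") + "\n") // written back to /etc/hosts in the real flow
}
```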
	I1206 08:28:40.083977   10917 kubeadm.go:884] updating cluster {Name:addons-765040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:28:40.084123   10917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:28:40.084173   10917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:28:40.114788   10917 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 08:28:40.114807   10917 crio.go:433] Images already preloaded, skipping extraction
	I1206 08:28:40.114849   10917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:28:40.138740   10917 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 08:28:40.138761   10917 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:28:40.138769   10917 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1206 08:28:40.138856   10917 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-765040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:28:40.138920   10917 ssh_runner.go:195] Run: crio config
	I1206 08:28:40.183335   10917 cni.go:84] Creating CNI manager for ""
	I1206 08:28:40.183355   10917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 08:28:40.183367   10917 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:28:40.183391   10917 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-765040 NodeName:addons-765040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:28:40.183515   10917 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-765040"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 08:28:40.183574   10917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 08:28:40.191272   10917 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:28:40.191339   10917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:28:40.198455   10917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 08:28:40.210144   10917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 08:28:40.224387   10917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1206 08:28:40.236760   10917 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:28:40.240230   10917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 08:28:40.249788   10917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:28:40.323463   10917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:28:40.346323   10917 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040 for IP: 192.168.49.2
	I1206 08:28:40.346350   10917 certs.go:195] generating shared ca certs ...
	I1206 08:28:40.346377   10917 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.346498   10917 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 08:28:40.437423   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt ...
	I1206 08:28:40.437452   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt: {Name:mk787430aa62b15e4c09755ea69ecf9fe7fa9f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.437627   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key ...
	I1206 08:28:40.437638   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key: {Name:mk563f3855d73e541816d90ff60f762f79826240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.437712   10917 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 08:28:40.556932   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt ...
	I1206 08:28:40.556962   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt: {Name:mk91c1d7726b80ca7113f5af7ecec813b675696a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.557156   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key ...
	I1206 08:28:40.557169   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key: {Name:mk8d3b98839a40feddb9b7b002317adb40731e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.557243   10917 certs.go:257] generating profile certs ...
	I1206 08:28:40.557295   10917 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.key
	I1206 08:28:40.557309   10917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt with IP's: []
	I1206 08:28:40.683441   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt ...
	I1206 08:28:40.683487   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: {Name:mk7fcad273551a9b3aa2bddec0275a506cba529c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.683652   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.key ...
	I1206 08:28:40.683663   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.key: {Name:mk887fa76bbca443414283d235432f7d8d352866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.683734   10917 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key.e8265716
	I1206 08:28:40.683753   10917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt.e8265716 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 08:28:40.857859   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt.e8265716 ...
	I1206 08:28:40.857891   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt.e8265716: {Name:mk3f9ea382a0ae431eb357f49d155fb1f62ef1a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.858071   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key.e8265716 ...
	I1206 08:28:40.858085   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key.e8265716: {Name:mk52e10fc020597e20b09c5a443cf291499ee32d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.858157   10917 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt.e8265716 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt
	I1206 08:28:40.858237   10917 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key.e8265716 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key
	I1206 08:28:40.858286   10917 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.key
	I1206 08:28:40.858303   10917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.crt with IP's: []
	I1206 08:28:40.962968   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.crt ...
	I1206 08:28:40.963007   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.crt: {Name:mk19789c186eed481733390928a022e0cbad9d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.963218   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.key ...
	I1206 08:28:40.963239   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.key: {Name:mk56dadf43ac756d308eaa62cfea0ebe0d85fc37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.963450   10917 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:28:40.963528   10917 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 08:28:40.963561   10917 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:28:40.963588   10917 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 08:28:40.964102   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:28:40.981812   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 08:28:40.998535   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:28:41.015644   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:28:41.033071   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 08:28:41.049921   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 08:28:41.067501   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:28:41.084820   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:28:41.101841   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:28:41.120282   10917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:28:41.132295   10917 ssh_runner.go:195] Run: openssl version
	I1206 08:28:41.138150   10917 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:28:41.145197   10917 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:28:41.154739   10917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:28:41.158338   10917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:28:41.158385   10917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:28:41.193034   10917 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:28:41.200588   10917 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 08:28:41.208251   10917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:28:41.211716   10917 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 08:28:41.211770   10917 kubeadm.go:401] StartCluster: {Name:addons-765040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:28:41.211856   10917 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:28:41.211917   10917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:28:41.238074   10917 cri.go:89] found id: ""
	I1206 08:28:41.238146   10917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:28:41.246144   10917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:28:41.253646   10917 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 08:28:41.253697   10917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:28:41.261311   10917 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 08:28:41.261327   10917 kubeadm.go:158] found existing configuration files:
	
	I1206 08:28:41.261372   10917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 08:28:41.268604   10917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 08:28:41.268664   10917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 08:28:41.275273   10917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 08:28:41.282344   10917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 08:28:41.282398   10917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:28:41.289352   10917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 08:28:41.296566   10917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 08:28:41.296626   10917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:28:41.303534   10917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 08:28:41.310584   10917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 08:28:41.310628   10917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 08:28:41.317435   10917 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 08:28:41.352193   10917 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 08:28:41.352260   10917 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 08:28:41.371285   10917 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 08:28:41.371350   10917 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 08:28:41.371419   10917 kubeadm.go:319] OS: Linux
	I1206 08:28:41.371510   10917 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 08:28:41.371573   10917 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 08:28:41.371623   10917 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 08:28:41.371673   10917 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 08:28:41.371712   10917 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 08:28:41.371755   10917 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 08:28:41.371794   10917 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 08:28:41.371835   10917 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 08:28:41.424546   10917 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 08:28:41.424642   10917 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 08:28:41.424758   10917 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 08:28:41.430850   10917 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 08:28:41.432830   10917 out.go:252]   - Generating certificates and keys ...
	I1206 08:28:41.432900   10917 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 08:28:41.433009   10917 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 08:28:41.559848   10917 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 08:28:41.831564   10917 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 08:28:42.040592   10917 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 08:28:42.253502   10917 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 08:28:42.444686   10917 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 08:28:42.444839   10917 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-765040 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 08:28:42.580933   10917 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 08:28:42.581082   10917 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-765040 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 08:28:42.971101   10917 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 08:28:43.073281   10917 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 08:28:43.389764   10917 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 08:28:43.389850   10917 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 08:28:43.827808   10917 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 08:28:44.354475   10917 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 08:28:44.893741   10917 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 08:28:45.069716   10917 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 08:28:45.375477   10917 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 08:28:45.375830   10917 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 08:28:45.379453   10917 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 08:28:45.381820   10917 out.go:252]   - Booting up control plane ...
	I1206 08:28:45.381952   10917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 08:28:45.382091   10917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 08:28:45.382886   10917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 08:28:45.396073   10917 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 08:28:45.396238   10917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 08:28:45.402491   10917 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 08:28:45.402775   10917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 08:28:45.402837   10917 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 08:28:45.500380   10917 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 08:28:45.500541   10917 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 08:28:46.002336   10917 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.915255ms
	I1206 08:28:46.005100   10917 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 08:28:46.005230   10917 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 08:28:46.005322   10917 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 08:28:46.005401   10917 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 08:28:47.742014   10917 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.736809794s
	I1206 08:28:48.107478   10917 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.102317882s
	I1206 08:28:50.007191   10917 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002039342s
	I1206 08:28:50.023487   10917 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 08:28:50.033655   10917 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 08:28:50.041965   10917 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 08:28:50.042206   10917 kubeadm.go:319] [mark-control-plane] Marking the node addons-765040 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 08:28:50.049790   10917 kubeadm.go:319] [bootstrap-token] Using token: 0jkuew.7iwu0edepru23801
	I1206 08:28:50.051160   10917 out.go:252]   - Configuring RBAC rules ...
	I1206 08:28:50.051320   10917 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 08:28:50.055239   10917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 08:28:50.060157   10917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 08:28:50.062402   10917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 08:28:50.064852   10917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 08:28:50.067185   10917 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 08:28:50.411875   10917 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 08:28:50.826152   10917 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 08:28:51.413496   10917 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 08:28:51.414622   10917 kubeadm.go:319] 
	I1206 08:28:51.414729   10917 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 08:28:51.414739   10917 kubeadm.go:319] 
	I1206 08:28:51.414800   10917 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 08:28:51.414806   10917 kubeadm.go:319] 
	I1206 08:28:51.414826   10917 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 08:28:51.414876   10917 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 08:28:51.414918   10917 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 08:28:51.414924   10917 kubeadm.go:319] 
	I1206 08:28:51.415047   10917 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 08:28:51.415066   10917 kubeadm.go:319] 
	I1206 08:28:51.415144   10917 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 08:28:51.415162   10917 kubeadm.go:319] 
	I1206 08:28:51.415215   10917 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 08:28:51.415315   10917 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 08:28:51.415421   10917 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 08:28:51.415434   10917 kubeadm.go:319] 
	I1206 08:28:51.415551   10917 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 08:28:51.415643   10917 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 08:28:51.415655   10917 kubeadm.go:319] 
	I1206 08:28:51.415765   10917 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0jkuew.7iwu0edepru23801 \
	I1206 08:28:51.415909   10917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 08:28:51.415942   10917 kubeadm.go:319] 	--control-plane 
	I1206 08:28:51.415957   10917 kubeadm.go:319] 
	I1206 08:28:51.416096   10917 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 08:28:51.416109   10917 kubeadm.go:319] 
	I1206 08:28:51.416207   10917 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0jkuew.7iwu0edepru23801 \
	I1206 08:28:51.416381   10917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 08:28:51.417512   10917 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 08:28:51.417633   10917 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 08:28:51.417648   10917 cni.go:84] Creating CNI manager for ""
	I1206 08:28:51.417658   10917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 08:28:51.420097   10917 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 08:28:51.421571   10917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 08:28:51.425641   10917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 08:28:51.425658   10917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 08:28:51.438402   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 08:28:51.639221   10917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 08:28:51.639278   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:51.639282   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-765040 minikube.k8s.io/updated_at=2025_12_06T08_28_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=addons-765040 minikube.k8s.io/primary=true
	I1206 08:28:51.648303   10917 ops.go:34] apiserver oom_adj: -16
	I1206 08:28:51.707710   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:52.208495   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:52.708012   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:53.208748   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:53.708688   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:54.208542   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:54.707772   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:55.208144   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:55.708348   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:56.208329   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:56.272404   10917 kubeadm.go:1114] duration metric: took 4.633179973s to wait for elevateKubeSystemPrivileges
	I1206 08:28:56.272442   10917 kubeadm.go:403] duration metric: took 15.060678031s to StartCluster
	I1206 08:28:56.272462   10917 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:56.272580   10917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:28:56.272945   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:56.273177   10917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 08:28:56.273198   10917 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 08:28:56.273275   10917 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 08:28:56.273398   10917 addons.go:70] Setting yakd=true in profile "addons-765040"
	I1206 08:28:56.273407   10917 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:28:56.273419   10917 addons.go:239] Setting addon yakd=true in "addons-765040"
	I1206 08:28:56.273420   10917 addons.go:70] Setting registry-creds=true in profile "addons-765040"
	I1206 08:28:56.273433   10917 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-765040"
	I1206 08:28:56.273413   10917 addons.go:70] Setting ingress-dns=true in profile "addons-765040"
	I1206 08:28:56.273456   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273463   10917 addons.go:70] Setting inspektor-gadget=true in profile "addons-765040"
	I1206 08:28:56.273467   10917 addons.go:239] Setting addon ingress-dns=true in "addons-765040"
	I1206 08:28:56.273475   10917 addons.go:239] Setting addon inspektor-gadget=true in "addons-765040"
	I1206 08:28:56.273478   10917 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-765040"
	I1206 08:28:56.273496   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273501   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273511   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273437   10917 addons.go:239] Setting addon registry-creds=true in "addons-765040"
	I1206 08:28:56.273628   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273635   10917 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-765040"
	I1206 08:28:56.273655   10917 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-765040"
	I1206 08:28:56.273677   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.274030   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274034   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274054   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274090   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274138   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274314   10917 addons.go:70] Setting storage-provisioner=true in profile "addons-765040"
	I1206 08:28:56.274615   10917 addons.go:239] Setting addon storage-provisioner=true in "addons-765040"
	I1206 08:28:56.274306   10917 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-765040"
	I1206 08:28:56.275192   10917 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-765040"
	I1206 08:28:56.275530   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.275932   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273455   10917 addons.go:70] Setting metrics-server=true in profile "addons-765040"
	I1206 08:28:56.276496   10917 addons.go:239] Setting addon metrics-server=true in "addons-765040"
	I1206 08:28:56.276528   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.276931   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.276972   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.278070   10917 out.go:179] * Verifying Kubernetes components...
	I1206 08:28:56.274340   10917 addons.go:70] Setting default-storageclass=true in profile "addons-765040"
	I1206 08:28:56.278395   10917 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-765040"
	I1206 08:28:56.278813   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274349   10917 addons.go:70] Setting cloud-spanner=true in profile "addons-765040"
	I1206 08:28:56.279316   10917 addons.go:239] Setting addon cloud-spanner=true in "addons-765040"
	I1206 08:28:56.279359   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.274357   10917 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-765040"
	I1206 08:28:56.279591   10917 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-765040"
	I1206 08:28:56.279624   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.279846   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.280196   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274368   10917 addons.go:70] Setting registry=true in profile "addons-765040"
	I1206 08:28:56.280432   10917 addons.go:239] Setting addon registry=true in "addons-765040"
	I1206 08:28:56.280463   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.274379   10917 addons.go:70] Setting gcp-auth=true in profile "addons-765040"
	I1206 08:28:56.282114   10917 mustload.go:66] Loading cluster: addons-765040
	I1206 08:28:56.274389   10917 addons.go:70] Setting volcano=true in profile "addons-765040"
	I1206 08:28:56.282301   10917 addons.go:239] Setting addon volcano=true in "addons-765040"
	I1206 08:28:56.282331   10917 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:28:56.282334   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.282604   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.282822   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274411   10917 addons.go:70] Setting ingress=true in profile "addons-765040"
	I1206 08:28:56.282901   10917 addons.go:239] Setting addon ingress=true in "addons-765040"
	I1206 08:28:56.282937   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.274875   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.285164   10917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:28:56.274399   10917 addons.go:70] Setting volumesnapshots=true in profile "addons-765040"
	I1206 08:28:56.285420   10917 addons.go:239] Setting addon volumesnapshots=true in "addons-765040"
	I1206 08:28:56.285450   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.285911   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.288727   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.292909   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.316317   10917 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 08:28:56.320272   10917 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 08:28:56.320301   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 08:28:56.320364   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.334657   10917 addons.go:239] Setting addon default-storageclass=true in "addons-765040"
	I1206 08:28:56.334706   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.335236   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.338230   10917 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1206 08:28:56.343845   10917 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 08:28:56.343869   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 08:28:56.343929   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.355053   10917 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-765040"
	I1206 08:28:56.359739   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.360960   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.364178   10917 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 08:28:56.365145   10917 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:28:56.365166   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 08:28:56.365225   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.370526   10917 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 08:28:56.371963   10917 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 08:28:56.372087   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 08:28:56.372178   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.374693   10917 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 08:28:56.377536   10917 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 08:28:56.377557   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 08:28:56.377618   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.394564   10917 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 08:28:56.395474   10917 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 08:28:56.397169   10917 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 08:28:56.397187   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 08:28:56.397275   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.397763   10917 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 08:28:56.398833   10917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 08:28:56.399089   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 08:28:56.399524   10917 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 08:28:56.399376   10917 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 08:28:56.400136   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 08:28:56.400620   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.400956   10917 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 08:28:56.400972   10917 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 08:28:56.400291   10917 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 08:28:56.401129   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.402353   10917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 08:28:56.402503   10917 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 08:28:56.402515   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 08:28:56.403242   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.404083   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 08:28:56.405786   10917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1206 08:28:56.405903   10917 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	W1206 08:28:56.407282   10917 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 08:28:56.407399   10917 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 08:28:56.407414   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 08:28:56.407510   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.407715   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 08:28:56.407824   10917 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 08:28:56.407834   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 08:28:56.407899   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.410457   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 08:28:56.411683   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 08:28:56.412878   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 08:28:56.413954   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 08:28:56.417157   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 08:28:56.418595   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 08:28:56.418617   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 08:28:56.418687   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.420058   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.424498   10917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 08:28:56.426584   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 08:28:56.427920   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 08:28:56.427951   10917 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 08:28:56.428093   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.442077   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.445913   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.452814   10917 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 08:28:56.452838   10917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 08:28:56.452895   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.457214   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.465423   10917 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 08:28:56.466817   10917 out.go:179]   - Using image docker.io/busybox:stable
	I1206 08:28:56.468222   10917 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 08:28:56.468241   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 08:28:56.468307   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.477186   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.485922   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.487281   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.487292   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.490425   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.491671   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.492643   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.503156   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	W1206 08:28:56.509424   10917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 08:28:56.509459   10917 retry.go:31] will retry after 257.593068ms: ssh: handshake failed: EOF
	I1206 08:28:56.509571   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.511577   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.513408   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	W1206 08:28:56.520705   10917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 08:28:56.520793   10917 retry.go:31] will retry after 260.871931ms: ssh: handshake failed: EOF
	I1206 08:28:56.525617   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	W1206 08:28:56.529191   10917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 08:28:56.529224   10917 retry.go:31] will retry after 148.947098ms: ssh: handshake failed: EOF
	I1206 08:28:56.529609   10917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:28:56.610415   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 08:28:56.621269   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 08:28:56.623293   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 08:28:56.641967   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 08:28:56.654251   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 08:28:56.656851   10917 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 08:28:56.656882   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 08:28:56.669895   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:28:56.677136   10917 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 08:28:56.677168   10917 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 08:28:56.682536   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 08:28:56.682561   10917 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 08:28:56.687568   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 08:28:56.691481   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 08:28:56.710855   10917 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 08:28:56.710882   10917 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 08:28:56.719525   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 08:28:56.719621   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 08:28:56.738226   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 08:28:56.738253   10917 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 08:28:56.741466   10917 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 08:28:56.741554   10917 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 08:28:56.742128   10917 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1206 08:28:56.744001   10917 node_ready.go:35] waiting up to 6m0s for node "addons-765040" to be "Ready" ...
	I1206 08:28:56.750307   10917 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 08:28:56.750473   10917 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 08:28:56.788256   10917 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 08:28:56.788298   10917 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 08:28:56.794220   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 08:28:56.797255   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 08:28:56.797281   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 08:28:56.811211   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 08:28:56.811297   10917 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 08:28:56.844903   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 08:28:56.844929   10917 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 08:28:56.859713   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 08:28:56.859797   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 08:28:56.860343   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 08:28:56.860420   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 08:28:56.901691   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 08:28:56.918777   10917 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 08:28:56.918869   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 08:28:56.922982   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 08:28:56.946527   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 08:28:56.946631   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 08:28:56.969893   10917 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 08:28:56.970018   10917 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 08:28:56.983427   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 08:28:56.991271   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 08:28:56.991383   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 08:28:57.003608   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:28:57.031681   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 08:28:57.031706   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 08:28:57.043957   10917 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 08:28:57.043980   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 08:28:57.082834   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 08:28:57.087714   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 08:28:57.087741   10917 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 08:28:57.138773   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 08:28:57.138863   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 08:28:57.189412   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 08:28:57.189434   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 08:28:57.258349   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 08:28:57.258377   10917 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 08:28:57.271379   10917 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-765040" context rescaled to 1 replicas
	I1206 08:28:57.307387   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 08:28:57.799575   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.108057406s)
	I1206 08:28:57.799613   10917 addons.go:495] Verifying addon ingress=true in "addons-765040"
	I1206 08:28:57.799670   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.005410428s)
	I1206 08:28:57.799811   10917 addons.go:495] Verifying addon metrics-server=true in "addons-765040"
	I1206 08:28:57.801348   10917 out.go:179] * Verifying ingress addon...
	I1206 08:28:57.801371   10917 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-765040 service yakd-dashboard -n yakd-dashboard
	
	I1206 08:28:57.803516   10917 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 08:28:57.806134   10917 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 08:28:57.806154   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:28:58.263076   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.279537762s)
	W1206 08:28:58.263133   10917 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 08:28:58.263140   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.259460835s)
	I1206 08:28:58.263160   10917 retry.go:31] will retry after 317.11688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 08:28:58.263189   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.18031332s)
	I1206 08:28:58.263208   10917 addons.go:495] Verifying addon registry=true in "addons-765040"
	I1206 08:28:58.263380   10917 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-765040"
	I1206 08:28:58.265209   10917 out.go:179] * Verifying registry addon...
	I1206 08:28:58.265213   10917 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 08:28:58.268174   10917 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 08:28:58.268192   10917 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 08:28:58.270610   10917 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 08:28:58.270628   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:28:58.271556   10917 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 08:28:58.271570   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:28:58.371711   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:28:58.580948   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1206 08:28:58.747569   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:28:58.771274   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:28:58.771360   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:28:58.806754   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:28:59.271637   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:28:59.271647   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:28:59.372932   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:28:59.771779   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:28:59.771874   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:28:59.806379   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:00.271461   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:00.271513   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:00.306885   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:00.771293   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:00.771323   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:00.806847   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:01.039394   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.458397401s)
	W1206 08:29:01.246853   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:01.271533   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:01.271648   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:01.372929   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:01.770940   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:01.771089   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:01.806479   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:02.271665   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:02.271676   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:02.307666   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:02.771497   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:02.771543   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:02.806294   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1206 08:29:03.247042   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:03.270975   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:03.271049   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:03.306300   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:03.771781   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:03.771831   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:03.806503   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:04.048687   10917 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 08:29:04.048751   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:29:04.066713   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:29:04.165647   10917 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 08:29:04.178252   10917 addons.go:239] Setting addon gcp-auth=true in "addons-765040"
	I1206 08:29:04.178301   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:29:04.178666   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:29:04.196450   10917 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 08:29:04.196495   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:29:04.214588   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:29:04.271593   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:04.271701   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:04.306911   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:04.307267   10917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 08:29:04.308806   10917 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 08:29:04.310038   10917 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 08:29:04.310050   10917 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 08:29:04.323122   10917 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 08:29:04.323143   10917 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 08:29:04.335638   10917 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 08:29:04.335656   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 08:29:04.347834   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 08:29:04.639918   10917 addons.go:495] Verifying addon gcp-auth=true in "addons-765040"
	I1206 08:29:04.641513   10917 out.go:179] * Verifying gcp-auth addon...
	I1206 08:29:04.643883   10917 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 08:29:04.647266   10917 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 08:29:04.647291   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:04.771121   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:04.771201   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:04.806731   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:05.147697   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:05.247239   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:05.271646   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:05.271680   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:05.307023   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:05.646535   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:05.771207   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:05.771329   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:05.806743   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:06.147221   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:06.271272   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:06.271343   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:06.306708   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:06.647312   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:06.771582   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:06.771671   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:06.807099   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:07.146649   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:07.247412   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:07.270599   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:07.270733   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:07.306095   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:07.646849   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:07.770705   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:07.770738   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:07.806383   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:08.147118   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:08.270626   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:08.270633   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:08.305870   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:08.646634   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:08.771884   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:08.771972   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:08.806503   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:09.147033   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:09.270964   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:09.270983   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:09.306353   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:09.648102   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:09.746279   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:09.771641   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:09.771657   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:09.806167   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:10.146699   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:10.271546   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:10.271623   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:10.307059   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:10.646538   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:10.771556   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:10.771581   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:10.806915   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:11.146810   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:11.270772   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:11.270918   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:11.306188   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:11.646801   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:11.747274   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:11.770862   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:11.770886   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:11.806240   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:12.146952   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:12.270507   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:12.270570   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:12.306952   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:12.646440   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:12.771255   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:12.771276   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:12.806614   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:13.146967   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:13.270870   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:13.270983   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:13.306291   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:13.647303   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:13.770885   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:13.770957   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:13.806461   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:14.147283   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:14.246652   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:14.270983   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:14.271037   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:14.306595   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:14.647483   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:14.771607   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:14.771677   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:14.805849   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:15.146456   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:15.272039   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:15.272472   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:15.306969   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:15.646393   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:15.771186   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:15.771206   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:15.806559   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:16.147336   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:16.271004   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:16.271041   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:16.306557   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:16.646452   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:16.747259   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:16.771667   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:16.771789   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:16.806008   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:17.146453   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:17.271082   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:17.271166   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:17.306497   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:17.647169   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:17.770869   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:17.770873   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:17.806422   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:18.147012   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:18.270886   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:18.270900   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:18.306231   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:18.647068   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:18.770611   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:18.770632   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:18.805969   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:19.147100   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:19.246475   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:19.270646   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:19.270656   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:19.306109   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:19.647063   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:19.771547   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:19.771639   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:19.805945   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:20.146620   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:20.271484   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:20.271492   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:20.306898   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:20.646313   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:20.771876   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:20.771942   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:20.806484   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:21.146814   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:21.247446   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:21.270910   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:21.270942   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:21.306466   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:21.647281   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:21.771513   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:21.771558   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:21.807277   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:22.147385   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:22.271164   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:22.271209   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:22.306540   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:22.647278   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:22.771240   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:22.771335   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:22.807016   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:23.146448   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:23.271443   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:23.271490   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:23.306867   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:23.646916   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:23.747272   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:23.771578   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:23.771705   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:23.810456   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:24.147000   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:24.271132   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:24.271206   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:24.306976   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:24.646636   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:24.771408   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:24.771434   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:24.806800   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:25.146191   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:25.270889   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:25.270914   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:25.306894   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:25.646481   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:25.771431   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:25.771431   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:25.807079   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:26.146637   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:26.247263   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:26.270809   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:26.270881   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:26.306346   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:26.647053   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:26.770847   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:26.770869   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:26.806520   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:27.147435   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:27.271323   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:27.271420   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:27.306849   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:27.646379   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:27.771104   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:27.771225   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:27.806660   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:28.146344   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:28.271097   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:28.271253   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:28.306667   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:28.646234   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:28.746623   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:28.771164   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:28.771199   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:28.806721   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:29.147314   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:29.271558   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:29.271637   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:29.306281   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:29.647297   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:29.771240   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:29.771251   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:29.806565   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:30.147183   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:30.270841   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:30.270925   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:30.306302   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:30.646956   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:30.747447   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:30.770645   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:30.770806   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:30.806262   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:31.147043   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:31.271035   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:31.271054   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:31.306660   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:31.647251   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:31.771163   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:31.771227   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:31.806627   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:32.147406   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:32.271151   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:32.271211   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:32.306376   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:32.647135   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:32.770620   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:32.770710   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:32.806207   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:33.147071   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:33.246387   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:33.271053   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:33.271145   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:33.306552   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:33.647299   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:33.771039   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:33.771127   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:33.806655   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:34.146371   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:34.271173   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:34.271195   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:34.307190   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:34.646268   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:34.770864   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:34.771025   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:34.806490   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:35.147127   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:35.246558   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:35.271045   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:35.271036   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:35.306336   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:35.647137   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:35.771038   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:35.771056   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:35.806656   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:36.147271   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:36.270964   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:36.270972   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:36.306495   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:36.647140   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:36.771096   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:36.771131   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:36.806646   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:37.147095   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:37.246300   10917 node_ready.go:49] node "addons-765040" is "Ready"
	I1206 08:29:37.246328   10917 node_ready.go:38] duration metric: took 40.50230852s for node "addons-765040" to be "Ready" ...
	I1206 08:29:37.246342   10917 api_server.go:52] waiting for apiserver process to appear ...
	I1206 08:29:37.246399   10917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:29:37.259736   10917 api_server.go:72] duration metric: took 40.986504637s to wait for apiserver process to appear ...
	I1206 08:29:37.259760   10917 api_server.go:88] waiting for apiserver healthz status ...
	I1206 08:29:37.259776   10917 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 08:29:37.264676   10917 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 08:29:37.265627   10917 api_server.go:141] control plane version: v1.34.2
	I1206 08:29:37.265654   10917 api_server.go:131] duration metric: took 5.887155ms to wait for apiserver health ...
	I1206 08:29:37.265665   10917 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 08:29:37.269833   10917 system_pods.go:59] 20 kube-system pods found
	I1206 08:29:37.269884   10917 system_pods.go:61] "amd-gpu-device-plugin-vdlbw" [510111ef-4ea7-4ce1-9f3c-c2ab122bf34a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:37.269898   10917 system_pods.go:61] "coredns-66bc5c9577-qjx25" [36e612b0-69c4-4247-a437-43a2fcdf950d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:37.269915   10917 system_pods.go:61] "csi-hostpath-attacher-0" [01c3f146-28a2-47b5-a5bc-ca7d91d9021a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 08:29:37.269927   10917 system_pods.go:61] "csi-hostpath-resizer-0" [7264e7b1-3c31-40e6-a013-02677c390d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 08:29:37.269938   10917 system_pods.go:61] "csi-hostpathplugin-2bz69" [b2a9c9f1-c56e-4833-b5fc-208b2bb21af8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 08:29:37.269949   10917 system_pods.go:61] "etcd-addons-765040" [24481339-2633-4407-a115-aead2c19dd54] Running
	I1206 08:29:37.269956   10917 system_pods.go:61] "kindnet-v4khk" [0089bbea-3bfd-4a95-b3ed-766db95c31aa] Running
	I1206 08:29:37.269963   10917 system_pods.go:61] "kube-apiserver-addons-765040" [a4e598de-2604-4736-9548-dc9194ae94c5] Running
	I1206 08:29:37.269969   10917 system_pods.go:61] "kube-controller-manager-addons-765040" [927625b4-10d3-46ca-ae9c-3636ada9d821] Running
	I1206 08:29:37.269982   10917 system_pods.go:61] "kube-ingress-dns-minikube" [ed420759-e022-4b65-913d-c9fcc663e580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:37.269999   10917 system_pods.go:61] "kube-proxy-zbjfm" [fd65bed8-e182-429b-899b-1cf57feb776a] Running
	I1206 08:29:37.270010   10917 system_pods.go:61] "kube-scheduler-addons-765040" [429b1374-ed54-474f-8328-7f6b7fcde6f5] Running
	I1206 08:29:37.270019   10917 system_pods.go:61] "metrics-server-85b7d694d7-zrbd8" [45cd8cc8-871f-4c28-b5fb-6a042b9f441f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:37.270032   10917 system_pods.go:61] "nvidia-device-plugin-daemonset-rxnr5" [5037481f-19f2-41a8-8e3a-dc392a124155] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:37.270044   10917 system_pods.go:61] "registry-6b586f9694-cc7hl" [aeab1a5f-6caa-4183-9dc0-b1c92e9bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:37.270055   10917 system_pods.go:61] "registry-creds-764b6fb674-jxk6v" [f3523040-0131-4851-92f4-25b2922d4fc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:37.270063   10917 system_pods.go:61] "registry-proxy-62qx6" [a5766114-0431-447e-b362-4c2e9c2ce565] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:37.270075   10917 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pcq6s" [e6c8be0f-a4c4-45b7-9044-734ef566b871] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.270088   10917 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wbvlw" [f87ef71e-a62f-4177-94e0-c0acfd83cdd9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.270098   10917 system_pods.go:61] "storage-provisioner" [4ac79d82-c2b3-4299-8a78-a8cd76fdc35f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 08:29:37.270107   10917 system_pods.go:74] duration metric: took 4.434661ms to wait for pod list to return data ...
	I1206 08:29:37.270119   10917 default_sa.go:34] waiting for default service account to be created ...
	I1206 08:29:37.270579   10917 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 08:29:37.270597   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:37.270706   10917 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 08:29:37.270722   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:37.272115   10917 default_sa.go:45] found service account: "default"
	I1206 08:29:37.272136   10917 default_sa.go:55] duration metric: took 2.009516ms for default service account to be created ...
	I1206 08:29:37.272146   10917 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 08:29:37.277710   10917 system_pods.go:86] 20 kube-system pods found
	I1206 08:29:37.277745   10917 system_pods.go:89] "amd-gpu-device-plugin-vdlbw" [510111ef-4ea7-4ce1-9f3c-c2ab122bf34a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:37.277757   10917 system_pods.go:89] "coredns-66bc5c9577-qjx25" [36e612b0-69c4-4247-a437-43a2fcdf950d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:37.277766   10917 system_pods.go:89] "csi-hostpath-attacher-0" [01c3f146-28a2-47b5-a5bc-ca7d91d9021a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 08:29:37.277775   10917 system_pods.go:89] "csi-hostpath-resizer-0" [7264e7b1-3c31-40e6-a013-02677c390d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 08:29:37.277792   10917 system_pods.go:89] "csi-hostpathplugin-2bz69" [b2a9c9f1-c56e-4833-b5fc-208b2bb21af8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 08:29:37.277798   10917 system_pods.go:89] "etcd-addons-765040" [24481339-2633-4407-a115-aead2c19dd54] Running
	I1206 08:29:37.277805   10917 system_pods.go:89] "kindnet-v4khk" [0089bbea-3bfd-4a95-b3ed-766db95c31aa] Running
	I1206 08:29:37.277814   10917 system_pods.go:89] "kube-apiserver-addons-765040" [a4e598de-2604-4736-9548-dc9194ae94c5] Running
	I1206 08:29:37.277820   10917 system_pods.go:89] "kube-controller-manager-addons-765040" [927625b4-10d3-46ca-ae9c-3636ada9d821] Running
	I1206 08:29:37.277832   10917 system_pods.go:89] "kube-ingress-dns-minikube" [ed420759-e022-4b65-913d-c9fcc663e580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:37.277840   10917 system_pods.go:89] "kube-proxy-zbjfm" [fd65bed8-e182-429b-899b-1cf57feb776a] Running
	I1206 08:29:37.277846   10917 system_pods.go:89] "kube-scheduler-addons-765040" [429b1374-ed54-474f-8328-7f6b7fcde6f5] Running
	I1206 08:29:37.277857   10917 system_pods.go:89] "metrics-server-85b7d694d7-zrbd8" [45cd8cc8-871f-4c28-b5fb-6a042b9f441f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:37.277867   10917 system_pods.go:89] "nvidia-device-plugin-daemonset-rxnr5" [5037481f-19f2-41a8-8e3a-dc392a124155] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:37.277879   10917 system_pods.go:89] "registry-6b586f9694-cc7hl" [aeab1a5f-6caa-4183-9dc0-b1c92e9bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:37.277886   10917 system_pods.go:89] "registry-creds-764b6fb674-jxk6v" [f3523040-0131-4851-92f4-25b2922d4fc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:37.277897   10917 system_pods.go:89] "registry-proxy-62qx6" [a5766114-0431-447e-b362-4c2e9c2ce565] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:37.277905   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pcq6s" [e6c8be0f-a4c4-45b7-9044-734ef566b871] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.277915   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wbvlw" [f87ef71e-a62f-4177-94e0-c0acfd83cdd9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.277924   10917 system_pods.go:89] "storage-provisioner" [4ac79d82-c2b3-4299-8a78-a8cd76fdc35f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 08:29:37.277942   10917 retry.go:31] will retry after 235.392267ms: missing components: kube-dns
	I1206 08:29:37.368622   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:37.517978   10917 system_pods.go:86] 20 kube-system pods found
	I1206 08:29:37.518032   10917 system_pods.go:89] "amd-gpu-device-plugin-vdlbw" [510111ef-4ea7-4ce1-9f3c-c2ab122bf34a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:37.518042   10917 system_pods.go:89] "coredns-66bc5c9577-qjx25" [36e612b0-69c4-4247-a437-43a2fcdf950d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:37.518051   10917 system_pods.go:89] "csi-hostpath-attacher-0" [01c3f146-28a2-47b5-a5bc-ca7d91d9021a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 08:29:37.518059   10917 system_pods.go:89] "csi-hostpath-resizer-0" [7264e7b1-3c31-40e6-a013-02677c390d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 08:29:37.518076   10917 system_pods.go:89] "csi-hostpathplugin-2bz69" [b2a9c9f1-c56e-4833-b5fc-208b2bb21af8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 08:29:37.518086   10917 system_pods.go:89] "etcd-addons-765040" [24481339-2633-4407-a115-aead2c19dd54] Running
	I1206 08:29:37.518092   10917 system_pods.go:89] "kindnet-v4khk" [0089bbea-3bfd-4a95-b3ed-766db95c31aa] Running
	I1206 08:29:37.518099   10917 system_pods.go:89] "kube-apiserver-addons-765040" [a4e598de-2604-4736-9548-dc9194ae94c5] Running
	I1206 08:29:37.518106   10917 system_pods.go:89] "kube-controller-manager-addons-765040" [927625b4-10d3-46ca-ae9c-3636ada9d821] Running
	I1206 08:29:37.518116   10917 system_pods.go:89] "kube-ingress-dns-minikube" [ed420759-e022-4b65-913d-c9fcc663e580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:37.518124   10917 system_pods.go:89] "kube-proxy-zbjfm" [fd65bed8-e182-429b-899b-1cf57feb776a] Running
	I1206 08:29:37.518131   10917 system_pods.go:89] "kube-scheduler-addons-765040" [429b1374-ed54-474f-8328-7f6b7fcde6f5] Running
	I1206 08:29:37.518139   10917 system_pods.go:89] "metrics-server-85b7d694d7-zrbd8" [45cd8cc8-871f-4c28-b5fb-6a042b9f441f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:37.518150   10917 system_pods.go:89] "nvidia-device-plugin-daemonset-rxnr5" [5037481f-19f2-41a8-8e3a-dc392a124155] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:37.518161   10917 system_pods.go:89] "registry-6b586f9694-cc7hl" [aeab1a5f-6caa-4183-9dc0-b1c92e9bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:37.518171   10917 system_pods.go:89] "registry-creds-764b6fb674-jxk6v" [f3523040-0131-4851-92f4-25b2922d4fc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:37.518183   10917 system_pods.go:89] "registry-proxy-62qx6" [a5766114-0431-447e-b362-4c2e9c2ce565] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:37.518192   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pcq6s" [e6c8be0f-a4c4-45b7-9044-734ef566b871] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.518206   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wbvlw" [f87ef71e-a62f-4177-94e0-c0acfd83cdd9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.518220   10917 system_pods.go:89] "storage-provisioner" [4ac79d82-c2b3-4299-8a78-a8cd76fdc35f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 08:29:37.518242   10917 retry.go:31] will retry after 268.797227ms: missing components: kube-dns
	I1206 08:29:37.646917   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:37.775700   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:37.775931   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:37.797227   10917 system_pods.go:86] 20 kube-system pods found
	I1206 08:29:37.797264   10917 system_pods.go:89] "amd-gpu-device-plugin-vdlbw" [510111ef-4ea7-4ce1-9f3c-c2ab122bf34a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:37.797272   10917 system_pods.go:89] "coredns-66bc5c9577-qjx25" [36e612b0-69c4-4247-a437-43a2fcdf950d] Running
	I1206 08:29:37.797283   10917 system_pods.go:89] "csi-hostpath-attacher-0" [01c3f146-28a2-47b5-a5bc-ca7d91d9021a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 08:29:37.797292   10917 system_pods.go:89] "csi-hostpath-resizer-0" [7264e7b1-3c31-40e6-a013-02677c390d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 08:29:37.797303   10917 system_pods.go:89] "csi-hostpathplugin-2bz69" [b2a9c9f1-c56e-4833-b5fc-208b2bb21af8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 08:29:37.797311   10917 system_pods.go:89] "etcd-addons-765040" [24481339-2633-4407-a115-aead2c19dd54] Running
	I1206 08:29:37.797317   10917 system_pods.go:89] "kindnet-v4khk" [0089bbea-3bfd-4a95-b3ed-766db95c31aa] Running
	I1206 08:29:37.797322   10917 system_pods.go:89] "kube-apiserver-addons-765040" [a4e598de-2604-4736-9548-dc9194ae94c5] Running
	I1206 08:29:37.797328   10917 system_pods.go:89] "kube-controller-manager-addons-765040" [927625b4-10d3-46ca-ae9c-3636ada9d821] Running
	I1206 08:29:37.797337   10917 system_pods.go:89] "kube-ingress-dns-minikube" [ed420759-e022-4b65-913d-c9fcc663e580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:37.797341   10917 system_pods.go:89] "kube-proxy-zbjfm" [fd65bed8-e182-429b-899b-1cf57feb776a] Running
	I1206 08:29:37.797347   10917 system_pods.go:89] "kube-scheduler-addons-765040" [429b1374-ed54-474f-8328-7f6b7fcde6f5] Running
	I1206 08:29:37.797356   10917 system_pods.go:89] "metrics-server-85b7d694d7-zrbd8" [45cd8cc8-871f-4c28-b5fb-6a042b9f441f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:37.797363   10917 system_pods.go:89] "nvidia-device-plugin-daemonset-rxnr5" [5037481f-19f2-41a8-8e3a-dc392a124155] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:37.797373   10917 system_pods.go:89] "registry-6b586f9694-cc7hl" [aeab1a5f-6caa-4183-9dc0-b1c92e9bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:37.797381   10917 system_pods.go:89] "registry-creds-764b6fb674-jxk6v" [f3523040-0131-4851-92f4-25b2922d4fc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:37.797390   10917 system_pods.go:89] "registry-proxy-62qx6" [a5766114-0431-447e-b362-4c2e9c2ce565] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:37.797397   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pcq6s" [e6c8be0f-a4c4-45b7-9044-734ef566b871] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.797407   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wbvlw" [f87ef71e-a62f-4177-94e0-c0acfd83cdd9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.797412   10917 system_pods.go:89] "storage-provisioner" [4ac79d82-c2b3-4299-8a78-a8cd76fdc35f] Running
	I1206 08:29:37.797422   10917 system_pods.go:126] duration metric: took 525.270292ms to wait for k8s-apps to be running ...
	I1206 08:29:37.797432   10917 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 08:29:37.797483   10917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:29:37.825813   10917 system_svc.go:56] duration metric: took 28.373343ms WaitForService to wait for kubelet
	I1206 08:29:37.825847   10917 kubeadm.go:587] duration metric: took 41.552620172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:29:37.825870   10917 node_conditions.go:102] verifying NodePressure condition ...
	I1206 08:29:37.828891   10917 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 08:29:37.828922   10917 node_conditions.go:123] node cpu capacity is 8
	I1206 08:29:37.828939   10917 node_conditions.go:105] duration metric: took 3.062685ms to run NodePressure ...
	I1206 08:29:37.828953   10917 start.go:242] waiting for startup goroutines ...
	I1206 08:29:37.875074   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:38.146887   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:38.271626   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:38.271640   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:38.307582   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:38.647930   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:38.772386   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:38.772451   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:38.873411   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:39.147256   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:39.271286   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:39.271371   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:39.307106   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:39.647785   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:39.773604   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:39.773898   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:39.831519   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:40.148694   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:40.271639   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:40.272020   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:40.306932   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:40.646847   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:40.772205   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:40.772261   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:40.807358   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:41.147512   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:41.271459   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:41.271642   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:41.307571   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:41.647851   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:41.774102   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:41.774324   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:41.808333   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:42.148976   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:42.272456   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:42.272980   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:42.307092   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:42.646821   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:42.772064   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:42.772072   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:42.806881   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:43.146863   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:43.271716   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:43.271957   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:43.307238   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:43.695490   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:43.771257   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:43.771344   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:43.809292   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:44.147170   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:44.271963   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:44.272476   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:44.307101   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:44.647012   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:44.772256   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:44.772316   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:44.807304   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:45.146711   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:45.271715   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:45.271844   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:45.307139   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:45.697016   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:45.822084   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:45.822123   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:45.822166   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:46.147658   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:46.272620   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:46.273294   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:46.307851   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:46.646843   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:46.772159   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:46.772489   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:46.806767   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:47.146728   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:47.271299   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:47.271385   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:47.306692   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:47.647127   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:47.772267   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:47.772373   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:47.807158   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:48.147231   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:48.272133   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:48.272297   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:48.307128   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:48.647127   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:48.772237   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:48.772278   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:48.806913   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:49.146732   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:49.271716   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:49.271917   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:49.306622   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:49.647753   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:49.771892   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:49.772063   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:49.807251   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:50.146890   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:50.271830   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:50.272095   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:50.306922   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:50.647739   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:50.771913   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:50.772065   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:50.842492   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:51.147977   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:51.271464   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:51.271624   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:51.307673   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:51.648044   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:51.772314   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:51.772356   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:51.807034   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:52.146927   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:52.271945   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:52.272036   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:52.372592   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:52.648201   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:52.771236   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:52.771550   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:52.807470   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:53.147740   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:53.271345   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:53.271523   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:53.306875   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:53.647142   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:53.773201   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:53.773327   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:53.807145   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:54.146605   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:54.271572   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:54.271835   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:54.306150   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:54.646612   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:54.771792   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:54.771838   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:54.807312   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:55.147350   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:55.271384   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:55.271509   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:55.307232   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:55.646838   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:55.772179   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:55.772222   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:55.806976   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:56.148324   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:56.271000   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:56.271308   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:56.307629   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:56.647897   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:56.771867   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:56.772077   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:56.806962   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:57.146894   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:57.271925   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:57.271979   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:57.373000   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:57.647050   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:57.771971   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:57.772134   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:57.806928   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:58.146765   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:58.271857   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:58.271922   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:58.372753   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:58.646422   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:58.770796   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:58.770960   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:58.806590   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:59.147835   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:59.271752   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:59.271862   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:59.306548   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:59.647794   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:59.771809   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:59.771947   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:59.806704   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:00.147779   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:00.271462   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:00.271624   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:00.306935   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:00.647016   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:00.771714   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:00.771764   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:00.806213   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:01.147404   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:01.271134   10917 kapi.go:107] duration metric: took 1m3.002946556s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 08:30:01.271183   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:01.306601   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:01.648044   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:01.772493   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:01.808643   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:02.148107   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:02.272110   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:02.306874   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:02.647412   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:02.771675   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:02.807643   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:03.147695   10917 kapi.go:107] duration metric: took 58.503812966s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 08:30:03.149198   10917 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-765040 cluster.
	I1206 08:30:03.150474   10917 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 08:30:03.151650   10917 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 08:30:03.272643   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:03.308372   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:03.771966   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:03.807946   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:04.272077   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:04.307974   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:04.772470   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:04.807306   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:05.271334   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:05.307128   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:05.772507   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:05.872669   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:06.271651   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:06.307927   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:06.772359   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:06.807259   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:07.271716   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:07.372111   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:07.772141   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:07.807063   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:08.272147   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:08.306320   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:08.771596   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:08.807215   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:09.272235   10917 kapi.go:107] duration metric: took 1m11.004038288s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 08:30:09.306663   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:09.807152   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:10.344706   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:10.806842   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:11.307647   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:11.807767   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:12.306980   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:12.807574   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:13.306750   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:13.807064   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:14.307441   10917 kapi.go:107] duration metric: took 1m16.503920461s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 08:30:14.309252   10917 out.go:179] * Enabled addons: registry-creds, cloud-spanner, nvidia-device-plugin, ingress-dns, inspektor-gadget, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1206 08:30:14.310517   10917 addons.go:530] duration metric: took 1m18.037246923s for enable addons: enabled=[registry-creds cloud-spanner nvidia-device-plugin ingress-dns inspektor-gadget amd-gpu-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1206 08:30:14.310560   10917 start.go:247] waiting for cluster config update ...
	I1206 08:30:14.310579   10917 start.go:256] writing updated cluster config ...
	I1206 08:30:14.310909   10917 ssh_runner.go:195] Run: rm -f paused
	I1206 08:30:14.314945   10917 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 08:30:14.318704   10917 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qjx25" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.323423   10917 pod_ready.go:94] pod "coredns-66bc5c9577-qjx25" is "Ready"
	I1206 08:30:14.323446   10917 pod_ready.go:86] duration metric: took 4.713787ms for pod "coredns-66bc5c9577-qjx25" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.325284   10917 pod_ready.go:83] waiting for pod "etcd-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.328574   10917 pod_ready.go:94] pod "etcd-addons-765040" is "Ready"
	I1206 08:30:14.328597   10917 pod_ready.go:86] duration metric: took 3.287521ms for pod "etcd-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.330428   10917 pod_ready.go:83] waiting for pod "kube-apiserver-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.333679   10917 pod_ready.go:94] pod "kube-apiserver-addons-765040" is "Ready"
	I1206 08:30:14.333703   10917 pod_ready.go:86] duration metric: took 3.251271ms for pod "kube-apiserver-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.335392   10917 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.718466   10917 pod_ready.go:94] pod "kube-controller-manager-addons-765040" is "Ready"
	I1206 08:30:14.718500   10917 pod_ready.go:86] duration metric: took 383.085368ms for pod "kube-controller-manager-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.918615   10917 pod_ready.go:83] waiting for pod "kube-proxy-zbjfm" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:15.319167   10917 pod_ready.go:94] pod "kube-proxy-zbjfm" is "Ready"
	I1206 08:30:15.319196   10917 pod_ready.go:86] duration metric: took 400.552236ms for pod "kube-proxy-zbjfm" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:15.519372   10917 pod_ready.go:83] waiting for pod "kube-scheduler-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:15.918836   10917 pod_ready.go:94] pod "kube-scheduler-addons-765040" is "Ready"
	I1206 08:30:15.918867   10917 pod_ready.go:86] duration metric: took 399.469373ms for pod "kube-scheduler-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:15.918878   10917 pod_ready.go:40] duration metric: took 1.603910103s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 08:30:15.963113   10917 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 08:30:15.964918   10917 out.go:179] * Done! kubectl is now configured to use "addons-765040" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.928421106Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-hkd8l/POD" id=c8f35350-7a0b-491c-8ef1-9e5aa8c099a6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.928508023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.935065077Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hkd8l Namespace:default ID:fb11a13d34d66299efa2a2dd14b2b2485227fdd5c24323319c73cb1b5a972582 UID:61e72960-6f8d-4f7d-ba2b-f38eaef7714b NetNS:/var/run/netns/086d15de-661a-40c5-a338-af893b5515df Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a2bb8}] Aliases:map[]}"
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.935111201Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-hkd8l to CNI network \"kindnet\" (type=ptp)"
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.946841747Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hkd8l Namespace:default ID:fb11a13d34d66299efa2a2dd14b2b2485227fdd5c24323319c73cb1b5a972582 UID:61e72960-6f8d-4f7d-ba2b-f38eaef7714b NetNS:/var/run/netns/086d15de-661a-40c5-a338-af893b5515df Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a2bb8}] Aliases:map[]}"
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.947075254Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-hkd8l for CNI network kindnet (type=ptp)"
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.948371424Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.949555783Z" level=info msg="Ran pod sandbox fb11a13d34d66299efa2a2dd14b2b2485227fdd5c24323319c73cb1b5a972582 with infra container: default/hello-world-app-5d498dc89-hkd8l/POD" id=c8f35350-7a0b-491c-8ef1-9e5aa8c099a6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.951062654Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=890134e8-5590-45bf-8431-d2feeb7e19b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.951211821Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=890134e8-5590-45bf-8431-d2feeb7e19b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.951263544Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=890134e8-5590-45bf-8431-d2feeb7e19b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.951944237Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=06786c22-cd3f-465a-b3d9-de9da88f32b2 name=/runtime.v1.ImageService/PullImage
	Dec 06 08:32:56 addons-765040 crio[770]: time="2025-12-06T08:32:56.957719817Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.319482991Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=06786c22-cd3f-465a-b3d9-de9da88f32b2 name=/runtime.v1.ImageService/PullImage
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.320109864Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5bbc53a9-8685-41fa-ac7f-9060a7a23e41 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.321799965Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=40a9e137-56a0-478e-8557-ca1240beb5ee name=/runtime.v1.ImageService/ImageStatus
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.325547527Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-hkd8l/hello-world-app" id=15fb2ab7-4fe5-4f34-af41-9e19d1538909 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.325655671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.331164039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.331324894Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3ab89b6c157ceab582e0978a96bc1cead56166f50e75d2d723ef61589e6b38c4/merged/etc/passwd: no such file or directory"
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.331347224Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3ab89b6c157ceab582e0978a96bc1cead56166f50e75d2d723ef61589e6b38c4/merged/etc/group: no such file or directory"
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.331557544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.368357704Z" level=info msg="Created container 638da9754ac4bb31eb41408458ebfab1456960c980778eddb7c93c2d7f02c296: default/hello-world-app-5d498dc89-hkd8l/hello-world-app" id=15fb2ab7-4fe5-4f34-af41-9e19d1538909 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.369157384Z" level=info msg="Starting container: 638da9754ac4bb31eb41408458ebfab1456960c980778eddb7c93c2d7f02c296" id=0a278f25-ddac-4324-8c3a-3c8d28e8e3a9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 08:32:57 addons-765040 crio[770]: time="2025-12-06T08:32:57.372121282Z" level=info msg="Started container" PID=9387 containerID=638da9754ac4bb31eb41408458ebfab1456960c980778eddb7c93c2d7f02c296 description=default/hello-world-app-5d498dc89-hkd8l/hello-world-app id=0a278f25-ddac-4324-8c3a-3c8d28e8e3a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb11a13d34d66299efa2a2dd14b2b2485227fdd5c24323319c73cb1b5a972582
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	638da9754ac4b       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   fb11a13d34d66       hello-world-app-5d498dc89-hkd8l             default
	cdba2594455ea       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   4c0dc7fbc2bdb       registry-creds-764b6fb674-jxk6v             kube-system
	119aa8c250859       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   aa17ab66d9bb7       nginx                                       default
	5d89ccab7bf00       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   e1e54352ec8d1       busybox                                     default
	375681b28101f       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   73af3261e197f       ingress-nginx-controller-85d4c799dd-k228z   ingress-nginx
	465f1ce06f190       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             2 minutes ago            Exited              patch                                    2                   3ec637d5437e4       ingress-nginx-admission-patch-f6h26         ingress-nginx
	fa9f9971a7530       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                    kube-system
	b9fb9ebbc4e81       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                    kube-system
	673c8f827d57b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                    kube-system
	4396f604e4ece       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                    kube-system
	a1b45309ac30c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   57851c0aec553       gadget-qs29w                                gadget
	1be3dd273f2cc       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                    kube-system
	e473ff4646804       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   0609a5d39a54f       gcp-auth-78565c9fb4-jjvdb                   gcp-auth
	885f97664324b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   6cc8b635c7337       registry-proxy-62qx6                        kube-system
	d68a7f31cdd6c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   77bb6fa77ec91       snapshot-controller-7d9fbc56b8-wbvlw        kube-system
	6a470b7cce4a2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                    kube-system
	6ee534b6b0267       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   11ff803294b05       nvidia-device-plugin-daemonset-rxnr5        kube-system
	94882f78bdf8a       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   14ae37751bb36       amd-gpu-device-plugin-vdlbw                 kube-system
	a19b0e90b5613       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   b5d56424517c7       ingress-nginx-admission-create-xh7gb        ingress-nginx
	354d0a526c419       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   686608e91534d       csi-hostpath-resizer-0                      kube-system
	aeeba8f92bca8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   51dfd9569bb03       local-path-provisioner-648f6765c9-hk7zm     local-path-storage
	d077eaf71426e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   cb2c2964b5bda       snapshot-controller-7d9fbc56b8-pcq6s        kube-system
	8c377935e6b6a       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   85a69ad9a0a91       cloud-spanner-emulator-5bdddb765-82xzx      default
	698b578cb2d85       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   2d1740c8e7633       csi-hostpath-attacher-0                     kube-system
	0cb647e368899       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   d3ba6e7892f3d       kube-ingress-dns-minikube                   kube-system
	5f25ed43f715b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   c1ec8a2800152       yakd-dashboard-5ff678cb9-m627q              yakd-dashboard
	1e8b8db988d1b       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   5884350e2a300       metrics-server-85b7d694d7-zrbd8             kube-system
	18c6142b8428d       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   9a6291b670694       registry-6b586f9694-cc7hl                   kube-system
	248f63d58002c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   0ef70e92773f8       storage-provisioner                         kube-system
	7f04ddcba299d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   33c79b605f13d       coredns-66bc5c9577-qjx25                    kube-system
	88a0bf3b6769d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   5ed02bf7fa615       kindnet-v4khk                               kube-system
	c9ca4911d0b8a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             4 minutes ago            Running             kube-proxy                               0                   7ca58eebd5a52       kube-proxy-zbjfm                            kube-system
	ac0e422b4a248       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   291defde480de       kube-apiserver-addons-765040                kube-system
	9164f996c22b8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   841dd4e78402c       etcd-addons-765040                          kube-system
	c72849d4fdd71       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   bbf3390ea962c       kube-controller-manager-addons-765040       kube-system
	a5a7b4678c49d       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   02c34c53efe3a       kube-scheduler-addons-765040                kube-system
	
	
	==> coredns [7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5] <==
	[INFO] 10.244.0.20:35534 - 52219 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119171s
	[INFO] 10.244.0.20:42723 - 7065 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00819394s
	[INFO] 10.244.0.20:52803 - 13048 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.008959808s
	[INFO] 10.244.0.20:41906 - 16255 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006004307s
	[INFO] 10.244.0.20:58772 - 57174 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008217808s
	[INFO] 10.244.0.20:54584 - 58165 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004600849s
	[INFO] 10.244.0.20:44435 - 34526 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007939785s
	[INFO] 10.244.0.20:60426 - 41686 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000874225s
	[INFO] 10.244.0.20:38647 - 33553 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001385594s
	[INFO] 10.244.0.26:43307 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238592s
	[INFO] 10.244.0.26:59829 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00016478s
	[INFO] 10.244.0.29:36266 - 53317 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000207884s
	[INFO] 10.244.0.29:50279 - 15933 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000301833s
	[INFO] 10.244.0.29:45950 - 5931 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000119072s
	[INFO] 10.244.0.29:51250 - 43476 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000193508s
	[INFO] 10.244.0.29:33493 - 23128 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00011734s
	[INFO] 10.244.0.29:43854 - 24681 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000141369s
	[INFO] 10.244.0.29:43632 - 56752 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.007212626s
	[INFO] 10.244.0.29:48614 - 58305 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.008022412s
	[INFO] 10.244.0.29:40797 - 25353 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00551143s
	[INFO] 10.244.0.29:36691 - 53021 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005676789s
	[INFO] 10.244.0.29:36643 - 39000 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005916016s
	[INFO] 10.244.0.29:44779 - 57843 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.007322186s
	[INFO] 10.244.0.29:38877 - 39172 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001647099s
	[INFO] 10.244.0.29:44837 - 17304 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002614849s
	
	
	==> describe nodes <==
	Name:               addons-765040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-765040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=addons-765040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T08_28_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-765040
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-765040"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 08:28:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-765040
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 08:32:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 08:32:56 +0000   Sat, 06 Dec 2025 08:28:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 08:32:56 +0000   Sat, 06 Dec 2025 08:28:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 08:32:56 +0000   Sat, 06 Dec 2025 08:28:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 08:32:56 +0000   Sat, 06 Dec 2025 08:29:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-765040
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                83923667-5335-4e95-b76a-aad86daca2a8
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  default                     cloud-spanner-emulator-5bdddb765-82xzx       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  default                     hello-world-app-5d498dc89-hkd8l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-qs29w                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  gcp-auth                    gcp-auth-78565c9fb4-jjvdb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-k228z    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m1s
	  kube-system                 amd-gpu-device-plugin-vdlbw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 coredns-66bc5c9577-qjx25                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m2s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 csi-hostpathplugin-2bz69                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 etcd-addons-765040                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m8s
	  kube-system                 kindnet-v4khk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-addons-765040                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-addons-765040        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-zbjfm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-addons-765040                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 metrics-server-85b7d694d7-zrbd8              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m1s
	  kube-system                 nvidia-device-plugin-daemonset-rxnr5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 registry-6b586f9694-cc7hl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 registry-creds-764b6fb674-jxk6v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 registry-proxy-62qx6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-pcq6s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 snapshot-controller-7d9fbc56b8-wbvlw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  local-path-storage          local-path-provisioner-648f6765c9-hk7zm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-m627q               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m1s   kube-proxy       
	  Normal  Starting                 4m8s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s   kubelet          Node addons-765040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s   kubelet          Node addons-765040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s   kubelet          Node addons-765040 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m3s   node-controller  Node addons-765040 event: Registered Node addons-765040 in Controller
	  Normal  NodeReady                3m22s  kubelet          Node addons-765040 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093] <==
	{"level":"warn","ts":"2025-12-06T08:28:47.524178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.532133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.539285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.548202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.555875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.563259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.569463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.577136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.584127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.595437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.603587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.610193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.617106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.636169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.642675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.649615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:58.802523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:58.809375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:29:25.117133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:29:25.124582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:29:25.144930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44492","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T08:29:45.695462Z","caller":"traceutil/trace.go:172","msg":"trace[50646713] linearizableReadLoop","detail":"{readStateIndex:1013; appliedIndex:1013; }","duration":"143.380855ms","start":"2025-12-06T08:29:45.552063Z","end":"2025-12-06T08:29:45.695444Z","steps":["trace[50646713] 'read index received'  (duration: 143.374553ms)","trace[50646713] 'applied index is now lower than readState.Index'  (duration: 4.846µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T08:29:45.695580Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.505785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:29:45.695625Z","caller":"traceutil/trace.go:172","msg":"trace[1926135458] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:985; }","duration":"143.565896ms","start":"2025-12-06T08:29:45.552054Z","end":"2025-12-06T08:29:45.695620Z","steps":["trace[1926135458] 'agreement among raft nodes before linearized reading'  (duration: 143.479927ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:29:45.695697Z","caller":"traceutil/trace.go:172","msg":"trace[1676551139] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"154.44511ms","start":"2025-12-06T08:29:45.541233Z","end":"2025-12-06T08:29:45.695678Z","steps":["trace[1676551139] 'process raft request'  (duration: 154.275401ms)"],"step_count":1}
	
	
	==> gcp-auth [e473ff4646804b1d1dbe5234b3b5cf91c9ceacf662f64eb9094c2601c86e59b2] <==
	2025/12/06 08:30:02 GCP Auth Webhook started!
	2025/12/06 08:30:16 Ready to marshal response ...
	2025/12/06 08:30:16 Ready to write response ...
	2025/12/06 08:30:16 Ready to marshal response ...
	2025/12/06 08:30:16 Ready to write response ...
	2025/12/06 08:30:16 Ready to marshal response ...
	2025/12/06 08:30:16 Ready to write response ...
	2025/12/06 08:30:31 Ready to marshal response ...
	2025/12/06 08:30:31 Ready to write response ...
	2025/12/06 08:30:31 Ready to marshal response ...
	2025/12/06 08:30:31 Ready to write response ...
	2025/12/06 08:30:31 Ready to marshal response ...
	2025/12/06 08:30:31 Ready to write response ...
	2025/12/06 08:30:36 Ready to marshal response ...
	2025/12/06 08:30:36 Ready to write response ...
	2025/12/06 08:30:38 Ready to marshal response ...
	2025/12/06 08:30:38 Ready to write response ...
	2025/12/06 08:30:41 Ready to marshal response ...
	2025/12/06 08:30:41 Ready to write response ...
	2025/12/06 08:30:55 Ready to marshal response ...
	2025/12/06 08:30:55 Ready to write response ...
	2025/12/06 08:32:56 Ready to marshal response ...
	2025/12/06 08:32:56 Ready to write response ...
	
	
	==> kernel <==
	 08:32:58 up 15 min,  0 user,  load average: 0.34, 0.52, 0.26
	Linux addons-765040 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f] <==
	I1206 08:30:56.670489       1 main.go:301] handling current node
	I1206 08:31:06.670920       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:31:06.670959       1 main.go:301] handling current node
	I1206 08:31:16.679023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:31:16.679058       1 main.go:301] handling current node
	I1206 08:31:26.675769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:31:26.675798       1 main.go:301] handling current node
	I1206 08:31:36.671089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:31:36.671118       1 main.go:301] handling current node
	I1206 08:31:46.674974       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:31:46.675027       1 main.go:301] handling current node
	I1206 08:31:56.677682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:31:56.677712       1 main.go:301] handling current node
	I1206 08:32:06.677148       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:32:06.677192       1 main.go:301] handling current node
	I1206 08:32:16.671940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:32:16.671978       1 main.go:301] handling current node
	I1206 08:32:26.670975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:32:26.671053       1 main.go:301] handling current node
	I1206 08:32:36.670091       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:32:36.670122       1 main.go:301] handling current node
	I1206 08:32:46.670329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:32:46.670367       1 main.go:301] handling current node
	I1206 08:32:56.670292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:32:56.670329       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 08:29:41.857057       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:41.859130       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:41.863904       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:41.885103       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:41.927305       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:42.008945       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:42.170341       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	W1206 08:29:42.857518       1 handler_proxy.go:99] no RequestInfo found in the context
	W1206 08:29:42.857548       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 08:29:42.857585       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1206 08:29:42.857608       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1206 08:29:42.857610       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1206 08:29:42.858739       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 08:29:43.584752       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1206 08:30:24.636175       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38666: use of closed network connection
	E1206 08:30:24.783021       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38698: use of closed network connection
	I1206 08:30:31.550741       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 08:30:31.724648       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.246.28"}
	I1206 08:30:46.408801       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 08:32:56.698184       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.246.16"}
	
	
	==> kube-controller-manager [c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda] <==
	I1206 08:28:55.101874       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 08:28:55.102348       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 08:28:55.102376       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 08:28:55.102643       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 08:28:55.103679       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 08:28:55.104902       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 08:28:55.104921       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 08:28:55.105006       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 08:28:55.105056       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 08:28:55.105062       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 08:28:55.105069       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 08:28:55.107915       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 08:28:55.110273       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 08:28:55.112941       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-765040" podCIDRs=["10.244.0.0/24"]
	I1206 08:28:55.116147       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 08:28:55.126616       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1206 08:28:57.546211       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1206 08:29:25.111763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 08:29:25.111898       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1206 08:29:25.111935       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 08:29:25.135434       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1206 08:29:25.139109       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 08:29:25.212263       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 08:29:25.239949       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 08:29:40.056911       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc] <==
	I1206 08:28:56.215030       1 server_linux.go:53] "Using iptables proxy"
	I1206 08:28:56.293538       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 08:28:56.404192       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 08:28:56.404486       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 08:28:56.404586       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 08:28:56.576236       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 08:28:56.576303       1 server_linux.go:132] "Using iptables Proxier"
	I1206 08:28:56.585389       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 08:28:56.591758       1 server.go:527] "Version info" version="v1.34.2"
	I1206 08:28:56.591792       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 08:28:56.595682       1 config.go:200] "Starting service config controller"
	I1206 08:28:56.595709       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 08:28:56.595739       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 08:28:56.595754       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 08:28:56.595777       1 config.go:106] "Starting endpoint slice config controller"
	I1206 08:28:56.595783       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 08:28:56.595958       1 config.go:309] "Starting node config controller"
	I1206 08:28:56.596032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 08:28:56.596086       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 08:28:56.695904       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 08:28:56.698067       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 08:28:56.698096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d] <==
	E1206 08:28:48.103545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 08:28:48.103659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 08:28:48.103680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 08:28:48.103798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 08:28:48.103842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 08:28:48.103872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 08:28:48.104030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 08:28:48.104083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 08:28:48.104091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 08:28:48.104090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 08:28:48.104176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 08:28:48.104179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 08:28:48.104190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 08:28:48.104179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 08:28:48.104243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 08:28:48.942382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 08:28:48.951502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 08:28:48.979909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 08:28:49.090391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 08:28:49.136006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 08:28:49.144974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 08:28:49.269342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 08:28:49.312596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 08:28:49.317589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1206 08:28:49.701588       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 08:30:57 addons-765040 kubelet[1295]: I1206 08:30:57.205880    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.963609819 podStartE2EDuration="2.205861254s" podCreationTimestamp="2025-12-06 08:30:55 +0000 UTC" firstStartedPulling="2025-12-06 08:30:55.968338598 +0000 UTC m=+125.406868583" lastFinishedPulling="2025-12-06 08:30:56.210590037 +0000 UTC m=+125.649120018" observedRunningTime="2025-12-06 08:30:57.205021691 +0000 UTC m=+126.643551685" watchObservedRunningTime="2025-12-06 08:30:57.205861254 +0000 UTC m=+126.644391256"
	Dec 06 08:31:02 addons-765040 kubelet[1295]: I1206 08:31:02.956695    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fc882b36-6341-455a-bf2a-0d70ce42ac3b-gcp-creds\") pod \"fc882b36-6341-455a-bf2a-0d70ce42ac3b\" (UID: \"fc882b36-6341-455a-bf2a-0d70ce42ac3b\") "
	Dec 06 08:31:02 addons-765040 kubelet[1295]: I1206 08:31:02.956832    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^e1b80b1e-d27d-11f0-849d-2ef9a20420a0\") pod \"fc882b36-6341-455a-bf2a-0d70ce42ac3b\" (UID: \"fc882b36-6341-455a-bf2a-0d70ce42ac3b\") "
	Dec 06 08:31:02 addons-765040 kubelet[1295]: I1206 08:31:02.956860    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv7z6\" (UniqueName: \"kubernetes.io/projected/fc882b36-6341-455a-bf2a-0d70ce42ac3b-kube-api-access-kv7z6\") pod \"fc882b36-6341-455a-bf2a-0d70ce42ac3b\" (UID: \"fc882b36-6341-455a-bf2a-0d70ce42ac3b\") "
	Dec 06 08:31:02 addons-765040 kubelet[1295]: I1206 08:31:02.956851    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc882b36-6341-455a-bf2a-0d70ce42ac3b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "fc882b36-6341-455a-bf2a-0d70ce42ac3b" (UID: "fc882b36-6341-455a-bf2a-0d70ce42ac3b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 06 08:31:02 addons-765040 kubelet[1295]: I1206 08:31:02.956969    1295 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fc882b36-6341-455a-bf2a-0d70ce42ac3b-gcp-creds\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:31:02 addons-765040 kubelet[1295]: I1206 08:31:02.959398    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc882b36-6341-455a-bf2a-0d70ce42ac3b-kube-api-access-kv7z6" (OuterVolumeSpecName: "kube-api-access-kv7z6") pod "fc882b36-6341-455a-bf2a-0d70ce42ac3b" (UID: "fc882b36-6341-455a-bf2a-0d70ce42ac3b"). InnerVolumeSpecName "kube-api-access-kv7z6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 08:31:02 addons-765040 kubelet[1295]: I1206 08:31:02.960004    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^e1b80b1e-d27d-11f0-849d-2ef9a20420a0" (OuterVolumeSpecName: "task-pv-storage") pod "fc882b36-6341-455a-bf2a-0d70ce42ac3b" (UID: "fc882b36-6341-455a-bf2a-0d70ce42ac3b"). InnerVolumeSpecName "pvc-61b9119b-4471-4544-8f32-4a2bc0510d11". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 06 08:31:03 addons-765040 kubelet[1295]: I1206 08:31:03.058572    1295 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-61b9119b-4471-4544-8f32-4a2bc0510d11\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^e1b80b1e-d27d-11f0-849d-2ef9a20420a0\") on node \"addons-765040\" "
	Dec 06 08:31:03 addons-765040 kubelet[1295]: I1206 08:31:03.058624    1295 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kv7z6\" (UniqueName: \"kubernetes.io/projected/fc882b36-6341-455a-bf2a-0d70ce42ac3b-kube-api-access-kv7z6\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:31:03 addons-765040 kubelet[1295]: I1206 08:31:03.064137    1295 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-61b9119b-4471-4544-8f32-4a2bc0510d11" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^e1b80b1e-d27d-11f0-849d-2ef9a20420a0") on node "addons-765040"
	Dec 06 08:31:03 addons-765040 kubelet[1295]: I1206 08:31:03.159596    1295 reconciler_common.go:299] "Volume detached for volume \"pvc-61b9119b-4471-4544-8f32-4a2bc0510d11\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^e1b80b1e-d27d-11f0-849d-2ef9a20420a0\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:31:03 addons-765040 kubelet[1295]: I1206 08:31:03.219857    1295 scope.go:117] "RemoveContainer" containerID="56b8641f177d142ab3a2b12d62068ab493f07d8963f80360f8c544ce51934b07"
	Dec 06 08:31:03 addons-765040 kubelet[1295]: I1206 08:31:03.229710    1295 scope.go:117] "RemoveContainer" containerID="56b8641f177d142ab3a2b12d62068ab493f07d8963f80360f8c544ce51934b07"
	Dec 06 08:31:03 addons-765040 kubelet[1295]: E1206 08:31:03.230132    1295 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b8641f177d142ab3a2b12d62068ab493f07d8963f80360f8c544ce51934b07\": container with ID starting with 56b8641f177d142ab3a2b12d62068ab493f07d8963f80360f8c544ce51934b07 not found: ID does not exist" containerID="56b8641f177d142ab3a2b12d62068ab493f07d8963f80360f8c544ce51934b07"
	Dec 06 08:31:03 addons-765040 kubelet[1295]: I1206 08:31:03.230179    1295 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b8641f177d142ab3a2b12d62068ab493f07d8963f80360f8c544ce51934b07"} err="failed to get container status \"56b8641f177d142ab3a2b12d62068ab493f07d8963f80360f8c544ce51934b07\": rpc error: code = NotFound desc = could not find container \"56b8641f177d142ab3a2b12d62068ab493f07d8963f80360f8c544ce51934b07\": container with ID starting with 56b8641f177d142ab3a2b12d62068ab493f07d8963f80360f8c544ce51934b07 not found: ID does not exist"
	Dec 06 08:31:03 addons-765040 kubelet[1295]: I1206 08:31:03.641108    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vdlbw" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 08:31:04 addons-765040 kubelet[1295]: I1206 08:31:04.643507    1295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc882b36-6341-455a-bf2a-0d70ce42ac3b" path="/var/lib/kubelet/pods/fc882b36-6341-455a-bf2a-0d70ce42ac3b/volumes"
	Dec 06 08:31:16 addons-765040 kubelet[1295]: I1206 08:31:16.642148    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rxnr5" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 08:31:26 addons-765040 kubelet[1295]: I1206 08:31:26.641120    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-62qx6" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 08:32:19 addons-765040 kubelet[1295]: I1206 08:32:19.641101    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rxnr5" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 08:32:23 addons-765040 kubelet[1295]: I1206 08:32:23.641254    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vdlbw" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 08:32:35 addons-765040 kubelet[1295]: I1206 08:32:35.640833    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-62qx6" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 08:32:56 addons-765040 kubelet[1295]: I1206 08:32:56.803353    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/61e72960-6f8d-4f7d-ba2b-f38eaef7714b-gcp-creds\") pod \"hello-world-app-5d498dc89-hkd8l\" (UID: \"61e72960-6f8d-4f7d-ba2b-f38eaef7714b\") " pod="default/hello-world-app-5d498dc89-hkd8l"
	Dec 06 08:32:56 addons-765040 kubelet[1295]: I1206 08:32:56.803434    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xjcb\" (UniqueName: \"kubernetes.io/projected/61e72960-6f8d-4f7d-ba2b-f38eaef7714b-kube-api-access-8xjcb\") pod \"hello-world-app-5d498dc89-hkd8l\" (UID: \"61e72960-6f8d-4f7d-ba2b-f38eaef7714b\") " pod="default/hello-world-app-5d498dc89-hkd8l"
	
	
	==> storage-provisioner [248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762] <==
	W1206 08:32:34.342577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:36.345391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:36.349021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:38.352262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:38.356111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:40.359117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:40.363127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:42.365871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:42.369467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:44.371851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:44.376823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:46.379954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:46.384304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:48.387644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:48.391451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:50.394460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:50.398194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:52.400777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:52.404532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:54.407285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:54.410866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:56.413543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:56.418443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:58.422257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:32:58.427021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-765040 -n addons-765040
helpers_test.go:269: (dbg) Run:  kubectl --context addons-765040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-xh7gb ingress-nginx-admission-patch-f6h26
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-765040 describe pod ingress-nginx-admission-create-xh7gb ingress-nginx-admission-patch-f6h26
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-765040 describe pod ingress-nginx-admission-create-xh7gb ingress-nginx-admission-patch-f6h26: exit status 1 (55.280932ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xh7gb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f6h26" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-765040 describe pod ingress-nginx-admission-create-xh7gb ingress-nginx-admission-patch-f6h26: exit status 1
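A note on the two NotFound errors above: the post-mortem describe runs without a namespace, so it most likely looks in default while the admission pods found by the field-selector query live elsewhere. A hedged sketch for locating them by hand, reusing the field selector already shown above (the ingress-nginx namespace is an assumption about the stock ingress addon layout, not something this report confirms):

	# list all non-Running pods with their namespaces (same selector as the post-mortem step)
	kubectl --context addons-765040 get pods -A --field-selector=status.phase!=Running
	# then describe them in the namespace the listing reports (ingress-nginx assumed here)
	kubectl --context addons-765040 -n ingress-nginx describe pod ingress-nginx-admission-create-xh7gb ingress-nginx-admission-patch-f6h26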
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (237.29502ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:32:59.144105   25052 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:32:59.144360   25052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:32:59.144370   25052 out.go:374] Setting ErrFile to fd 2...
	I1206 08:32:59.144374   25052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:32:59.144563   25052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:32:59.144805   25052 mustload.go:66] Loading cluster: addons-765040
	I1206 08:32:59.145743   25052 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:32:59.145789   25052 addons.go:622] checking whether the cluster is paused
	I1206 08:32:59.146136   25052 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:32:59.146345   25052 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:32:59.146833   25052 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:32:59.165028   25052 ssh_runner.go:195] Run: systemctl --version
	I1206 08:32:59.165087   25052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:32:59.182310   25052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:32:59.274437   25052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:32:59.274506   25052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:32:59.303151   25052 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:32:59.303174   25052 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:32:59.303180   25052 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:32:59.303184   25052 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:32:59.303189   25052 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:32:59.303195   25052 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:32:59.303199   25052 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:32:59.303204   25052 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:32:59.303209   25052 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:32:59.303217   25052 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:32:59.303227   25052 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:32:59.303233   25052 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:32:59.303239   25052 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:32:59.303245   25052 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:32:59.303252   25052 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:32:59.303259   25052 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:32:59.303267   25052 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:32:59.303272   25052 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:32:59.303278   25052 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:32:59.303280   25052 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:32:59.303283   25052 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:32:59.303286   25052 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:32:59.303290   25052 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:32:59.303295   25052 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:32:59.303298   25052 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:32:59.303301   25052 cri.go:89] found id: ""
	I1206 08:32:59.303347   25052 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:32:59.318326   25052 out.go:203] 
	W1206 08:32:59.319751   25052 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:32:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:32:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:32:59.319777   25052 out.go:285] * 
	* 
	W1206 08:32:59.322807   25052 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:32:59.324218   25052 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable ingress --alsologtostderr -v=1: exit status 11 (240.310435ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:32:59.384922   25114 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:32:59.385081   25114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:32:59.385095   25114 out.go:374] Setting ErrFile to fd 2...
	I1206 08:32:59.385099   25114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:32:59.385318   25114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:32:59.385636   25114 mustload.go:66] Loading cluster: addons-765040
	I1206 08:32:59.386005   25114 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:32:59.386026   25114 addons.go:622] checking whether the cluster is paused
	I1206 08:32:59.386161   25114 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:32:59.386181   25114 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:32:59.386614   25114 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:32:59.404499   25114 ssh_runner.go:195] Run: systemctl --version
	I1206 08:32:59.404546   25114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:32:59.423170   25114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:32:59.514921   25114 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:32:59.514976   25114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:32:59.544091   25114 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:32:59.544111   25114 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:32:59.544122   25114 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:32:59.544126   25114 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:32:59.544131   25114 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:32:59.544137   25114 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:32:59.544141   25114 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:32:59.544146   25114 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:32:59.544150   25114 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:32:59.544164   25114 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:32:59.544174   25114 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:32:59.544179   25114 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:32:59.544186   25114 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:32:59.544191   25114 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:32:59.544199   25114 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:32:59.544207   25114 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:32:59.544215   25114 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:32:59.544223   25114 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:32:59.544227   25114 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:32:59.544232   25114 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:32:59.544240   25114 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:32:59.544247   25114 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:32:59.544252   25114 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:32:59.544260   25114 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:32:59.544264   25114 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:32:59.544267   25114 cri.go:89] found id: ""
	I1206 08:32:59.544307   25114 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:32:59.558826   25114 out.go:203] 
	W1206 08:32:59.560185   25114 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:32:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:32:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:32:59.560212   25114 out.go:285] * 
	* 
	W1206 08:32:59.563326   25114 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:32:59.564721   25114 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.28s)
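The Ingress failure above, like the other addon-disable failures in this run, dies before it ever touches the addon: the disable path first checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`, which cannot work on this crio-based node because /run/runc does not exist. A minimal repro sketch built only from commands that already appear in the stderr captures above (running them over `minikube ssh` is an assumption; the harness drives the same commands through its own ssh client):

	# Listing kube-system containers through the CRI works on the crio node:
	out/minikube-linux-amd64 -p addons-765040 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The paused check itself does not, because crio keeps no runc state directory:
	out/minikube-linux-amd64 -p addons-765040 ssh -- sudo runc list -f json
	# expected: level=error msg="open /run/runc: no such file or directory", exit status 1

The other exit status 11 failures in this section follow the same two steps.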

                                                
                                    

TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qs29w" [81bca283-0424-4ceb-ab08-1746096be210] Running
I1206 08:30:25.044919    9158 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 08:30:25.044936    9158 kapi.go:107] duration metric: took 5.676807ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002537813s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (256.383135ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:31.099611   19327 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:31.099871   19327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:31.099880   19327 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:31.099885   19327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:31.100122   19327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:31.100378   19327 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:31.100702   19327 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:31.100723   19327 addons.go:622] checking whether the cluster is paused
	I1206 08:30:31.100820   19327 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:31.100841   19327 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:31.101335   19327 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:31.122783   19327 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:31.122843   19327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:31.139879   19327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:31.233084   19327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:31.233185   19327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:31.266020   19327 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:31.266051   19327 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:31.266057   19327 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:31.266062   19327 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:31.266066   19327 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:31.266072   19327 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:31.266080   19327 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:31.266084   19327 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:31.266090   19327 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:31.266097   19327 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:31.266108   19327 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:31.266119   19327 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:31.266123   19327 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:31.266127   19327 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:31.266132   19327 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:31.266146   19327 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:31.266152   19327 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:31.266159   19327 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:31.266163   19327 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:31.266167   19327 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:31.266171   19327 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:31.266176   19327 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:31.266181   19327 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:31.266185   19327 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:31.266189   19327 cri.go:89] found id: ""
	I1206 08:30:31.266231   19327 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:31.281151   19327 out.go:203] 
	W1206 08:30:31.282727   19327 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:31.282748   19327 out.go:285] * 
	* 
	W1206 08:30:31.287951   19327 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:31.289484   19327 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.28274ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-zrbd8" [45cd8cc8-871f-4c28-b5fb-6a042b9f441f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00398874s
addons_test.go:463: (dbg) Run:  kubectl --context addons-765040 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (242.757984ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:54.824114   22538 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:54.824459   22538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:54.824474   22538 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:54.824481   22538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:54.825482   22538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:54.825900   22538 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:54.826745   22538 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:54.826773   22538 addons.go:622] checking whether the cluster is paused
	I1206 08:30:54.826865   22538 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:54.826881   22538 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:54.827291   22538 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:54.848146   22538 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:54.848211   22538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:54.865763   22538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:54.957552   22538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:54.957629   22538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:54.986769   22538 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:30:54.986790   22538 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:54.986794   22538 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:54.986798   22538 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:54.986801   22538 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:54.986806   22538 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:54.986809   22538 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:54.986812   22538 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:54.986815   22538 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:54.986820   22538 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:54.986823   22538 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:54.986826   22538 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:54.986830   22538 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:54.986833   22538 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:54.986836   22538 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:54.986844   22538 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:54.986850   22538 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:54.986854   22538 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:54.986857   22538 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:54.986860   22538 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:54.986863   22538 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:54.986866   22538 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:54.986869   22538 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:54.986871   22538 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:54.986874   22538 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:54.986879   22538 cri.go:89] found id: ""
	I1206 08:30:54.986915   22538 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:55.000552   22538 out.go:203] 
	W1206 08:30:55.001847   22538 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:55.001873   22538 out.go:285] * 
	* 
	W1206 08:30:55.005083   22538 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:55.006417   22538 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)
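The addon disable/enable failures in this report share one pattern: the addon operation itself is not rejected, but minikube first checks whether the cluster is paused by running "sudo runc list -f json" on the node, and that check exits 1 because /run/runc does not exist on this crio node. A minimal reproduction sketch, assuming the profile name from this run and that the node is reachable with "minikube ssh" (both usages appear elsewhere in this log):

	out/minikube-linux-amd64 -p addons-765040 ssh -- sudo runc list -f json
	# exits 1 with: open /run/runc: no such file or directory (same error as in the stderr above)
	out/minikube-linux-amd64 -p addons-765040 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# succeeds: crio itself still lists the kube-system containers, matching the cri.go "found id" lines above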

                                                
                                    
TestAddons/parallel/CSI (39s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.686889ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-765040 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-765040 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [4da70611-54f8-43cc-99f4-18fd179af769] Pending
helpers_test.go:352: "task-pv-pod" [4da70611-54f8-43cc-99f4-18fd179af769] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
2025/12/06 08:30:39 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:352: "task-pv-pod" [4da70611-54f8-43cc-99f4-18fd179af769] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003373336s
addons_test.go:572: (dbg) Run:  kubectl --context addons-765040 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-765040 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-765040 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-765040 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-765040 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-765040 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-765040 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [fc882b36-6341-455a-bf2a-0d70ce42ac3b] Pending
helpers_test.go:352: "task-pv-pod-restore" [fc882b36-6341-455a-bf2a-0d70ce42ac3b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [fc882b36-6341-455a-bf2a-0d70ce42ac3b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004460029s
addons_test.go:614: (dbg) Run:  kubectl --context addons-765040 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-765040 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-765040 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (238.327563ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:31:03.614958   22837 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:31:03.615123   22837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:31:03.615132   22837 out.go:374] Setting ErrFile to fd 2...
	I1206 08:31:03.615136   22837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:31:03.615323   22837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:31:03.615570   22837 mustload.go:66] Loading cluster: addons-765040
	I1206 08:31:03.615871   22837 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:31:03.615888   22837 addons.go:622] checking whether the cluster is paused
	I1206 08:31:03.615975   22837 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:31:03.616040   22837 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:31:03.616403   22837 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:31:03.635886   22837 ssh_runner.go:195] Run: systemctl --version
	I1206 08:31:03.635935   22837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:31:03.654880   22837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:31:03.746752   22837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:31:03.746838   22837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:31:03.775414   22837 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:31:03.775434   22837 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:31:03.775438   22837 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:31:03.775441   22837 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:31:03.775444   22837 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:31:03.775448   22837 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:31:03.775451   22837 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:31:03.775454   22837 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:31:03.775456   22837 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:31:03.775461   22837 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:31:03.775464   22837 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:31:03.775466   22837 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:31:03.775477   22837 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:31:03.775480   22837 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:31:03.775483   22837 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:31:03.775487   22837 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:31:03.775490   22837 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:31:03.775493   22837 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:31:03.775496   22837 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:31:03.775499   22837 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:31:03.775503   22837 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:31:03.775506   22837 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:31:03.775508   22837 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:31:03.775511   22837 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:31:03.775513   22837 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:31:03.775516   22837 cri.go:89] found id: ""
	I1206 08:31:03.775560   22837 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:31:03.789108   22837 out.go:203] 
	W1206 08:31:03.790373   22837 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:31:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:31:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:31:03.790389   22837 out.go:285] * 
	* 
	W1206 08:31:03.793373   22837 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:31:03.794880   22837 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (235.52281ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:31:03.854304   22899 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:31:03.854599   22899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:31:03.854609   22899 out.go:374] Setting ErrFile to fd 2...
	I1206 08:31:03.854613   22899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:31:03.854828   22899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:31:03.855131   22899 mustload.go:66] Loading cluster: addons-765040
	I1206 08:31:03.855476   22899 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:31:03.855497   22899 addons.go:622] checking whether the cluster is paused
	I1206 08:31:03.855580   22899 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:31:03.855596   22899 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:31:03.856046   22899 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:31:03.873817   22899 ssh_runner.go:195] Run: systemctl --version
	I1206 08:31:03.873874   22899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:31:03.890850   22899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:31:03.982918   22899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:31:03.983025   22899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:31:04.010909   22899 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:31:04.010928   22899 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:31:04.010932   22899 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:31:04.010935   22899 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:31:04.010939   22899 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:31:04.010943   22899 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:31:04.010946   22899 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:31:04.010948   22899 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:31:04.010951   22899 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:31:04.010957   22899 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:31:04.010960   22899 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:31:04.010963   22899 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:31:04.010979   22899 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:31:04.010982   22899 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:31:04.011004   22899 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:31:04.011016   22899 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:31:04.011020   22899 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:31:04.011025   22899 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:31:04.011028   22899 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:31:04.011030   22899 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:31:04.011033   22899 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:31:04.011036   22899 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:31:04.011038   22899 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:31:04.011041   22899 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:31:04.011043   22899 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:31:04.011046   22899 cri.go:89] found id: ""
	I1206 08:31:04.011084   22899 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:31:04.025590   22899 out.go:203] 
	W1206 08:31:04.026896   22899 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:31:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:31:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:31:04.026914   22899 out.go:285] * 
	* 
	W1206 08:31:04.029880   22899 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:31:04.031114   22899 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (39.00s)

                                                
                                    
TestAddons/parallel/Headlamp (2.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-765040 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-765040 --alsologtostderr -v=1: exit status 11 (239.147321ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:47.182630   21501 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:47.182940   21501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:47.182951   21501 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:47.182956   21501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:47.183203   21501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:47.183510   21501 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:47.183948   21501 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:47.183972   21501 addons.go:622] checking whether the cluster is paused
	I1206 08:30:47.184092   21501 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:47.184111   21501 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:47.184485   21501 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:47.202395   21501 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:47.202472   21501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:47.220378   21501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:47.313545   21501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:47.313644   21501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:47.342540   21501 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:30:47.342559   21501 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:47.342563   21501 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:47.342567   21501 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:47.342570   21501 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:47.342583   21501 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:47.342603   21501 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:47.342606   21501 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:47.342609   21501 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:47.342615   21501 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:47.342618   21501 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:47.342621   21501 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:47.342624   21501 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:47.342627   21501 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:47.342630   21501 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:47.342642   21501 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:47.342650   21501 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:47.342655   21501 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:47.342658   21501 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:47.342660   21501 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:47.342663   21501 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:47.342666   21501 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:47.342668   21501 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:47.342671   21501 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:47.342674   21501 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:47.342677   21501 cri.go:89] found id: ""
	I1206 08:30:47.342710   21501 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:47.357075   21501 out.go:203] 
	W1206 08:30:47.358319   21501 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:47.358341   21501 out.go:285] * 
	* 
	W1206 08:30:47.361291   21501 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:47.362630   21501 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-765040 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-765040
helpers_test.go:243: (dbg) docker inspect addons-765040:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f",
	        "Created": "2025-12-06T08:28:37.206934469Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11582,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:28:37.246417696Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f/hosts",
	        "LogPath": "/var/lib/docker/containers/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f/e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f-json.log",
	        "Name": "/addons-765040",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-765040:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-765040",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6ebd39802c205fe27ba11e232ea8542b5f4368a211059fdad28d3c4c26aa86f",
	                "LowerDir": "/var/lib/docker/overlay2/9cec2f81adbef1e2e38e29523745d1dc4e6d5c4ef993f319e48dd7a30e241dfd-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9cec2f81adbef1e2e38e29523745d1dc4e6d5c4ef993f319e48dd7a30e241dfd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9cec2f81adbef1e2e38e29523745d1dc4e6d5c4ef993f319e48dd7a30e241dfd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9cec2f81adbef1e2e38e29523745d1dc4e6d5c4ef993f319e48dd7a30e241dfd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-765040",
	                "Source": "/var/lib/docker/volumes/addons-765040/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-765040",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-765040",
	                "name.minikube.sigs.k8s.io": "addons-765040",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "149907238c9b06fe904b5ec6d924983f773136a3bcf194f7f91647f015ecb15f",
	            "SandboxKey": "/var/run/docker/netns/149907238c9b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-765040": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9fc234aa0001004b99f75640e5a6f610b5693a87a6c2ea28dadc06a580b327e0",
	                    "EndpointID": "cc27eb7806ba1e0b0f14efd409f935e0ccfec14d25cf37e14cabc84e9d21dc92",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "e2:83:ea:3c:05:06",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-765040",
	                        "e6ebd39802c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
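For reference, the 22/tcp mapping in the inspect output above (HostIp 127.0.0.1, HostPort 32768) is the SSH endpoint that the sshutil lines earlier in this report dial; the same Go template the tests invoke recovers it directly (quoting trimmed to the bare template here, values specific to this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-765040
	# prints 32768 for this run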
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-765040 -n addons-765040
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-765040 logs -n 25: (1.159962746s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-815139                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-815139   │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-319272                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-319272   │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-291174                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-291174   │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-815139                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-815139   │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ --download-only -p download-docker-857088 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-857088 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ -p download-docker-857088                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-857088 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ --download-only -p binary-mirror-791651 --alsologtostderr --binary-mirror http://127.0.0.1:42485 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-791651   │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ -p binary-mirror-791651                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-791651   │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ addons  │ enable dashboard -p addons-765040                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-765040                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ start   │ -p addons-765040 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:30 UTC │
	│ addons  │ addons-765040 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-765040                                                                                                                                                                                                                                                                                                                                                                                           │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │ 06 Dec 25 08:30 UTC │
	│ addons  │ addons-765040 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ ip      │ addons-765040 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │ 06 Dec 25 08:30 UTC │
	│ addons  │ addons-765040 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ ssh     │ addons-765040 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ ssh     │ addons-765040 ssh cat /opt/local-path-provisioner/pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │ 06 Dec 25 08:30 UTC │
	│ addons  │ addons-765040 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ addons-765040 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	│ addons  │ enable headlamp -p addons-765040 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-765040          │ jenkins │ v1.37.0 │ 06 Dec 25 08:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:13
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
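
The header above states the log line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). When post-processing a report like this one it can help to split those fields out; the following is only an illustrative sketch in Go (the regular expression and field names are mine, not part of minikube or the test harness):

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header format quoted above:
//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1206 08:28:13.517551   10917 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
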
	I1206 08:28:13.517551   10917 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:13.517870   10917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:13.517884   10917 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:13.517888   10917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:13.518119   10917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:28:13.518632   10917 out.go:368] Setting JSON to false
	I1206 08:28:13.519436   10917 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":645,"bootTime":1765009049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:28:13.519491   10917 start.go:143] virtualization: kvm guest
	I1206 08:28:13.521363   10917 out.go:179] * [addons-765040] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:28:13.522876   10917 notify.go:221] Checking for updates...
	I1206 08:28:13.522890   10917 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:28:13.524088   10917 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:13.525293   10917 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:28:13.526449   10917 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 08:28:13.527747   10917 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:28:13.529002   10917 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:28:13.530460   10917 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:13.552868   10917 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 08:28:13.552951   10917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:13.605014   10917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 08:28:13.596028874 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:28:13.605111   10917 docker.go:319] overlay module found
	I1206 08:28:13.606844   10917 out.go:179] * Using the docker driver based on user configuration
	I1206 08:28:13.608017   10917 start.go:309] selected driver: docker
	I1206 08:28:13.608034   10917 start.go:927] validating driver "docker" against <nil>
	I1206 08:28:13.608045   10917 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:28:13.608589   10917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:13.659285   10917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 08:28:13.65040943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:28:13.659460   10917 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:13.659756   10917 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:28:13.661710   10917 out.go:179] * Using Docker driver with root privileges
	I1206 08:28:13.662981   10917 cni.go:84] Creating CNI manager for ""
	I1206 08:28:13.663069   10917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 08:28:13.663085   10917 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 08:28:13.663167   10917 start.go:353] cluster config:
	{Name:addons-765040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1206 08:28:13.664560   10917 out.go:179] * Starting "addons-765040" primary control-plane node in "addons-765040" cluster
	I1206 08:28:13.665892   10917 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 08:28:13.667139   10917 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:28:13.668333   10917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:28:13.668372   10917 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 08:28:13.668378   10917 cache.go:65] Caching tarball of preloaded images
	I1206 08:28:13.668432   10917 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:28:13.668451   10917 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 08:28:13.668475   10917 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 08:28:13.668799   10917 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/config.json ...
	I1206 08:28:13.668820   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/config.json: {Name:mkc4940cc63cbd4e42707a0b9fa12c640aed83ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:13.685770   10917 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 08:28:13.685897   10917 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 08:28:13.685917   10917 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 08:28:13.685923   10917 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 08:28:13.685933   10917 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 08:28:13.685943   10917 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1206 08:28:26.872277   10917 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1206 08:28:26.872331   10917 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:28:26.872376   10917 start.go:360] acquireMachinesLock for addons-765040: {Name:mk815f37680f889a77215d594e93dfa4e4ffc3d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:28:26.872483   10917 start.go:364] duration metric: took 84.449µs to acquireMachinesLock for "addons-765040"
	I1206 08:28:26.872513   10917 start.go:93] Provisioning new machine with config: &{Name:addons-765040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 08:28:26.872585   10917 start.go:125] createHost starting for "" (driver="docker")
	I1206 08:28:26.875082   10917 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 08:28:26.875303   10917 start.go:159] libmachine.API.Create for "addons-765040" (driver="docker")
	I1206 08:28:26.875336   10917 client.go:173] LocalClient.Create starting
	I1206 08:28:26.875447   10917 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 08:28:26.946406   10917 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 08:28:27.114294   10917 cli_runner.go:164] Run: docker network inspect addons-765040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 08:28:27.132200   10917 cli_runner.go:211] docker network inspect addons-765040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 08:28:27.132264   10917 network_create.go:284] running [docker network inspect addons-765040] to gather additional debugging logs...
	I1206 08:28:27.132277   10917 cli_runner.go:164] Run: docker network inspect addons-765040
	W1206 08:28:27.147367   10917 cli_runner.go:211] docker network inspect addons-765040 returned with exit code 1
	I1206 08:28:27.147395   10917 network_create.go:287] error running [docker network inspect addons-765040]: docker network inspect addons-765040: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-765040 not found
	I1206 08:28:27.147421   10917 network_create.go:289] output of [docker network inspect addons-765040]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-765040 not found
	
	** /stderr **
	I1206 08:28:27.147512   10917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:28:27.164356   10917 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c468b0}
	I1206 08:28:27.164386   10917 network_create.go:124] attempt to create docker network addons-765040 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 08:28:27.164435   10917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-765040 addons-765040
	I1206 08:28:27.209037   10917 network_create.go:108] docker network addons-765040 192.168.49.0/24 created
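
At 08:28:27 the run picks the free private subnet 192.168.49.0/24 (gateway .1, client range .2-.254, broadcast .255) and creates the matching Docker network. As a hedged illustration of where those numbers come from, here is a small standard-library Go sketch that derives the same values for a /24; the helper is hypothetical and not minikube's own code:

package main

import (
	"fmt"
	"net"
)

// addrAt returns the subnet's base address offset by n.
// The single-byte add is fine only for offsets that stay inside a /24.
func addrAt(ipnet *net.IPNet, n int) net.IP {
	out := make(net.IP, 4)
	copy(out, ipnet.IP.To4())
	out[3] += byte(n)
	return out
}

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	hosts := 1 << (bits - ones) // 256 addresses in a /24

	fmt.Println("gateway:  ", addrAt(ipnet, 1))       // 192.168.49.1
	fmt.Println("clientMin:", addrAt(ipnet, 2))       // 192.168.49.2
	fmt.Println("clientMax:", addrAt(ipnet, hosts-2)) // 192.168.49.254
	fmt.Println("broadcast:", addrAt(ipnet, hosts-1)) // 192.168.49.255
}
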
	I1206 08:28:27.209088   10917 kic.go:121] calculated static IP "192.168.49.2" for the "addons-765040" container
	I1206 08:28:27.209152   10917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 08:28:27.225039   10917 cli_runner.go:164] Run: docker volume create addons-765040 --label name.minikube.sigs.k8s.io=addons-765040 --label created_by.minikube.sigs.k8s.io=true
	I1206 08:28:27.241947   10917 oci.go:103] Successfully created a docker volume addons-765040
	I1206 08:28:27.242043   10917 cli_runner.go:164] Run: docker run --rm --name addons-765040-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-765040 --entrypoint /usr/bin/test -v addons-765040:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 08:28:33.349737   10917 cli_runner.go:217] Completed: docker run --rm --name addons-765040-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-765040 --entrypoint /usr/bin/test -v addons-765040:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (6.107650132s)
	I1206 08:28:33.349763   10917 oci.go:107] Successfully prepared a docker volume addons-765040
	I1206 08:28:33.349817   10917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:28:33.349828   10917 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 08:28:33.349876   10917 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-765040:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 08:28:37.139539   10917 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-765040:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.789608888s)
	I1206 08:28:37.139585   10917 kic.go:203] duration metric: took 3.78975333s to extract preloaded images to volume ...
	W1206 08:28:37.139675   10917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 08:28:37.139717   10917 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 08:28:37.139755   10917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 08:28:37.191680   10917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-765040 --name addons-765040 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-765040 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-765040 --network addons-765040 --ip 192.168.49.2 --volume addons-765040:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 08:28:37.483915   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Running}}
	I1206 08:28:37.502886   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:37.521541   10917 cli_runner.go:164] Run: docker exec addons-765040 stat /var/lib/dpkg/alternatives/iptables
	I1206 08:28:37.573855   10917 oci.go:144] the created container "addons-765040" has a running status.
	I1206 08:28:37.573883   10917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa...
	I1206 08:28:37.666669   10917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 08:28:37.691472   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:37.714332   10917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 08:28:37.714359   10917 kic_runner.go:114] Args: [docker exec --privileged addons-765040 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 08:28:37.757534   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:37.782718   10917 machine.go:94] provisionDockerMachine start ...
	I1206 08:28:37.782849   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:37.805148   10917 main.go:143] libmachine: Using SSH client type: native
	I1206 08:28:37.805980   10917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1206 08:28:37.806020   10917 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:28:37.940896   10917 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-765040
	
	I1206 08:28:37.940926   10917 ubuntu.go:182] provisioning hostname "addons-765040"
	I1206 08:28:37.941003   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:37.961076   10917 main.go:143] libmachine: Using SSH client type: native
	I1206 08:28:37.961400   10917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1206 08:28:37.961425   10917 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-765040 && echo "addons-765040" | sudo tee /etc/hostname
	I1206 08:28:38.098462   10917 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-765040
	
	I1206 08:28:38.098538   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.118617   10917 main.go:143] libmachine: Using SSH client type: native
	I1206 08:28:38.118855   10917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1206 08:28:38.118881   10917 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-765040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-765040/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-765040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:28:38.245402   10917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:28:38.245433   10917 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 08:28:38.245451   10917 ubuntu.go:190] setting up certificates
	I1206 08:28:38.245459   10917 provision.go:84] configureAuth start
	I1206 08:28:38.245503   10917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-765040
	I1206 08:28:38.262175   10917 provision.go:143] copyHostCerts
	I1206 08:28:38.262242   10917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 08:28:38.262368   10917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 08:28:38.262444   10917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 08:28:38.262546   10917 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.addons-765040 san=[127.0.0.1 192.168.49.2 addons-765040 localhost minikube]
	I1206 08:28:38.279887   10917 provision.go:177] copyRemoteCerts
	I1206 08:28:38.279929   10917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:28:38.279957   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.296426   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:38.389028   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 08:28:38.407537   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 08:28:38.424272   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 08:28:38.440577   10917 provision.go:87] duration metric: took 195.106008ms to configureAuth
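
configureAuth above generates a server certificate whose SANs are 127.0.0.1, 192.168.49.2, addons-765040, localhost and minikube, and copies it to /etc/docker/server.pem inside the node. If you ever need to confirm what SANs a certificate like that actually carries, a minimal standard-library Go check looks roughly like this (the local file path and the expectations in the comments are illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Illustrative local copy; the log copies the real file to /etc/docker/server.pem on the node.
	data, err := os.ReadFile("server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames) // expect addons-765040, localhost, minikube
	for _, ip := range cert.IPAddresses {   // expect 127.0.0.1 and 192.168.49.2
		fmt.Println("IP SAN:  ", ip)
	}
}
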
	I1206 08:28:38.440605   10917 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:28:38.440811   10917 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:28:38.440913   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.459664   10917 main.go:143] libmachine: Using SSH client type: native
	I1206 08:28:38.459885   10917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1206 08:28:38.459905   10917 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 08:28:38.720520   10917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 08:28:38.720548   10917 machine.go:97] duration metric: took 937.801035ms to provisionDockerMachine
	I1206 08:28:38.720559   10917 client.go:176] duration metric: took 11.845205717s to LocalClient.Create
	I1206 08:28:38.720579   10917 start.go:167] duration metric: took 11.845275252s to libmachine.API.Create "addons-765040"
	I1206 08:28:38.720589   10917 start.go:293] postStartSetup for "addons-765040" (driver="docker")
	I1206 08:28:38.720602   10917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:28:38.720664   10917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:28:38.720720   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.738076   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:38.831377   10917 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:28:38.834534   10917 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:28:38.834560   10917 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:28:38.834574   10917 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 08:28:38.834628   10917 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 08:28:38.834653   10917 start.go:296] duration metric: took 114.057967ms for postStartSetup
	I1206 08:28:38.834949   10917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-765040
	I1206 08:28:38.851945   10917 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/config.json ...
	I1206 08:28:38.852223   10917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:28:38.852274   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.871235   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:38.959882   10917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:28:38.964209   10917 start.go:128] duration metric: took 12.091610543s to createHost
	I1206 08:28:38.964234   10917 start.go:83] releasing machines lock for "addons-765040", held for 12.091737561s
	I1206 08:28:38.964293   10917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-765040
	I1206 08:28:38.981586   10917 ssh_runner.go:195] Run: cat /version.json
	I1206 08:28:38.981669   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:38.981766   10917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:28:38.981838   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:39.001373   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:39.001393   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:39.145883   10917 ssh_runner.go:195] Run: systemctl --version
	I1206 08:28:39.152038   10917 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 08:28:39.184647   10917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 08:28:39.189152   10917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:28:39.189220   10917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:28:39.213372   10917 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 08:28:39.213398   10917 start.go:496] detecting cgroup driver to use...
	I1206 08:28:39.213429   10917 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 08:28:39.213475   10917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 08:28:39.228363   10917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 08:28:39.239867   10917 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:28:39.239923   10917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:28:39.255437   10917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:28:39.271764   10917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:28:39.353206   10917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:28:39.433938   10917 docker.go:234] disabling docker service ...
	I1206 08:28:39.434013   10917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:28:39.451394   10917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:28:39.463652   10917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:28:39.547471   10917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:28:39.623956   10917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:28:39.636124   10917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:28:39.649762   10917 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 08:28:39.649817   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.660041   10917 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 08:28:39.660102   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.668996   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.677331   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.686027   10917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:28:39.693878   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.702442   10917 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.716354   10917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:28:39.725208   10917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:28:39.732625   10917 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 08:28:39.732686   10917 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 08:28:39.744723   10917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:28:39.752658   10917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:28:39.828807   10917 ssh_runner.go:195] Run: sudo systemctl restart crio
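
The sequence ending here prepares CRI-O on the node: the pause image and systemd cgroup manager are rewritten in /etc/crio/crio.conf.d/02-crio.conf, the unprivileged-port sysctl is injected, the br_netfilter module is loaded after the bridge sysctl check fails, IPv4 forwarding is enabled, and the service is restarted. A rough Go sketch of just the two kernel checks involved, reading the same /proc/sys paths the log mentions (purely illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// enabled reports whether a /proc/sys toggle exists and is set to "1".
func enabled(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err // e.g. missing when br_netfilter is not loaded
	}
	return strings.TrimSpace(string(data)) == "1", nil
}

func main() {
	for _, path := range []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables", // requires the br_netfilter module
		"/proc/sys/net/ipv4/ip_forward",
	} {
		ok, err := enabled(path)
		switch {
		case err != nil:
			fmt.Printf("%s: %v\n", path, err)
		case ok:
			fmt.Printf("%s: enabled\n", path)
		default:
			fmt.Printf("%s: disabled\n", path)
		}
	}
}
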
	I1206 08:28:39.965608   10917 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 08:28:39.965669   10917 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 08:28:39.969462   10917 start.go:564] Will wait 60s for crictl version
	I1206 08:28:39.969517   10917 ssh_runner.go:195] Run: which crictl
	I1206 08:28:39.972887   10917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:28:39.996063   10917 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 08:28:39.996173   10917 ssh_runner.go:195] Run: crio --version
	I1206 08:28:40.023126   10917 ssh_runner.go:195] Run: crio --version
	I1206 08:28:40.051214   10917 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 08:28:40.052722   10917 cli_runner.go:164] Run: docker network inspect addons-765040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:28:40.069994   10917 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:28:40.074025   10917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 08:28:40.083977   10917 kubeadm.go:884] updating cluster {Name:addons-765040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:28:40.084123   10917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:28:40.084173   10917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:28:40.114788   10917 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 08:28:40.114807   10917 crio.go:433] Images already preloaded, skipping extraction
	I1206 08:28:40.114849   10917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:28:40.138740   10917 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 08:28:40.138761   10917 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:28:40.138769   10917 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1206 08:28:40.138856   10917 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-765040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:28:40.138920   10917 ssh_runner.go:195] Run: crio config
	I1206 08:28:40.183335   10917 cni.go:84] Creating CNI manager for ""
	I1206 08:28:40.183355   10917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 08:28:40.183367   10917 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:28:40.183391   10917 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-765040 NodeName:addons-765040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:28:40.183515   10917 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-765040"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 08:28:40.183574   10917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 08:28:40.191272   10917 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:28:40.191339   10917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:28:40.198455   10917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 08:28:40.210144   10917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 08:28:40.224387   10917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1206 08:28:40.236760   10917 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:28:40.240230   10917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
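The bash one-liner above rewrites /etc/hosts inside the node so that control-plane.minikube.internal resolves to the node IP, which is what the controlPlaneEndpoint in the kubeadm config points at. A quick manual check, as a sketch using the profile name from this run:

    minikube -p addons-765040 ssh -- grep control-plane.minikube.internal /etc/hosts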
	I1206 08:28:40.249788   10917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:28:40.323463   10917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:28:40.346323   10917 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040 for IP: 192.168.49.2
	I1206 08:28:40.346350   10917 certs.go:195] generating shared ca certs ...
	I1206 08:28:40.346377   10917 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.346498   10917 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 08:28:40.437423   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt ...
	I1206 08:28:40.437452   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt: {Name:mk787430aa62b15e4c09755ea69ecf9fe7fa9f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.437627   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key ...
	I1206 08:28:40.437638   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key: {Name:mk563f3855d73e541816d90ff60f762f79826240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.437712   10917 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 08:28:40.556932   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt ...
	I1206 08:28:40.556962   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt: {Name:mk91c1d7726b80ca7113f5af7ecec813b675696a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.557156   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key ...
	I1206 08:28:40.557169   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key: {Name:mk8d3b98839a40feddb9b7b002317adb40731e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.557243   10917 certs.go:257] generating profile certs ...
	I1206 08:28:40.557295   10917 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.key
	I1206 08:28:40.557309   10917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt with IP's: []
	I1206 08:28:40.683441   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt ...
	I1206 08:28:40.683487   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: {Name:mk7fcad273551a9b3aa2bddec0275a506cba529c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.683652   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.key ...
	I1206 08:28:40.683663   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.key: {Name:mk887fa76bbca443414283d235432f7d8d352866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.683734   10917 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key.e8265716
	I1206 08:28:40.683753   10917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt.e8265716 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 08:28:40.857859   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt.e8265716 ...
	I1206 08:28:40.857891   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt.e8265716: {Name:mk3f9ea382a0ae431eb357f49d155fb1f62ef1a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.858071   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key.e8265716 ...
	I1206 08:28:40.858085   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key.e8265716: {Name:mk52e10fc020597e20b09c5a443cf291499ee32d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.858157   10917 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt.e8265716 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt
	I1206 08:28:40.858237   10917 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key.e8265716 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key
	I1206 08:28:40.858286   10917 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.key
	I1206 08:28:40.858303   10917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.crt with IP's: []
	I1206 08:28:40.962968   10917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.crt ...
	I1206 08:28:40.963007   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.crt: {Name:mk19789c186eed481733390928a022e0cbad9d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.963218   10917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.key ...
	I1206 08:28:40.963239   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.key: {Name:mk56dadf43ac756d308eaa62cfea0ebe0d85fc37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:40.963450   10917 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:28:40.963528   10917 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 08:28:40.963561   10917 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:28:40.963588   10917 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 08:28:40.964102   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:28:40.981812   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 08:28:40.998535   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:28:41.015644   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:28:41.033071   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 08:28:41.049921   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 08:28:41.067501   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:28:41.084820   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:28:41.101841   10917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:28:41.120282   10917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:28:41.132295   10917 ssh_runner.go:195] Run: openssl version
	I1206 08:28:41.138150   10917 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:28:41.145197   10917 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:28:41.154739   10917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:28:41.158338   10917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:28:41.158385   10917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:28:41.193034   10917 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:28:41.200588   10917 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
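At this point the shared CA, proxy-client CA, and profile certificates have been generated and copied into /var/lib/minikube/certs, and the minikubeCA certificate is linked into the node's system trust store (the b5213941.0 hash link above). The apiserver certificate is signed for the service IP, loopback, and node IP; on a typical install the host-side copy lives under ~/.minikube/profiles/<profile>/ (this CI host uses a custom MINIKUBE_HOME), and its SANs can be confirmed with a generic openssl invocation (sketch):

    openssl x509 -noout -text -in ~/.minikube/profiles/addons-765040/apiserver.crt \
      | grep -A2 'Subject Alternative Name'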
	I1206 08:28:41.208251   10917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:28:41.211716   10917 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 08:28:41.211770   10917 kubeadm.go:401] StartCluster: {Name:addons-765040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-765040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:28:41.211856   10917 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:28:41.211917   10917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:28:41.238074   10917 cri.go:89] found id: ""
	I1206 08:28:41.238146   10917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:28:41.246144   10917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:28:41.253646   10917 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 08:28:41.253697   10917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:28:41.261311   10917 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 08:28:41.261327   10917 kubeadm.go:158] found existing configuration files:
	
	I1206 08:28:41.261372   10917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 08:28:41.268604   10917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 08:28:41.268664   10917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 08:28:41.275273   10917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 08:28:41.282344   10917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 08:28:41.282398   10917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:28:41.289352   10917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 08:28:41.296566   10917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 08:28:41.296626   10917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:28:41.303534   10917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 08:28:41.310584   10917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 08:28:41.310628   10917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 08:28:41.317435   10917 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 08:28:41.352193   10917 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 08:28:41.352260   10917 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 08:28:41.371285   10917 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 08:28:41.371350   10917 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 08:28:41.371419   10917 kubeadm.go:319] OS: Linux
	I1206 08:28:41.371510   10917 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 08:28:41.371573   10917 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 08:28:41.371623   10917 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 08:28:41.371673   10917 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 08:28:41.371712   10917 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 08:28:41.371755   10917 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 08:28:41.371794   10917 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 08:28:41.371835   10917 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 08:28:41.424546   10917 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 08:28:41.424642   10917 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 08:28:41.424758   10917 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 08:28:41.430850   10917 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 08:28:41.432830   10917 out.go:252]   - Generating certificates and keys ...
	I1206 08:28:41.432900   10917 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 08:28:41.433009   10917 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 08:28:41.559848   10917 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 08:28:41.831564   10917 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 08:28:42.040592   10917 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 08:28:42.253502   10917 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 08:28:42.444686   10917 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 08:28:42.444839   10917 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-765040 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 08:28:42.580933   10917 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 08:28:42.581082   10917 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-765040 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 08:28:42.971101   10917 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 08:28:43.073281   10917 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 08:28:43.389764   10917 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 08:28:43.389850   10917 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 08:28:43.827808   10917 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 08:28:44.354475   10917 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 08:28:44.893741   10917 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 08:28:45.069716   10917 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 08:28:45.375477   10917 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 08:28:45.375830   10917 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 08:28:45.379453   10917 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 08:28:45.381820   10917 out.go:252]   - Booting up control plane ...
	I1206 08:28:45.381952   10917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 08:28:45.382091   10917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 08:28:45.382886   10917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 08:28:45.396073   10917 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 08:28:45.396238   10917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 08:28:45.402491   10917 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 08:28:45.402775   10917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 08:28:45.402837   10917 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 08:28:45.500380   10917 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 08:28:45.500541   10917 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 08:28:46.002336   10917 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.915255ms
	I1206 08:28:46.005100   10917 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 08:28:46.005230   10917 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 08:28:46.005322   10917 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 08:28:46.005401   10917 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 08:28:47.742014   10917 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.736809794s
	I1206 08:28:48.107478   10917 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.102317882s
	I1206 08:28:50.007191   10917 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002039342s
	I1206 08:28:50.023487   10917 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 08:28:50.033655   10917 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 08:28:50.041965   10917 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 08:28:50.042206   10917 kubeadm.go:319] [mark-control-plane] Marking the node addons-765040 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 08:28:50.049790   10917 kubeadm.go:319] [bootstrap-token] Using token: 0jkuew.7iwu0edepru23801
	I1206 08:28:50.051160   10917 out.go:252]   - Configuring RBAC rules ...
	I1206 08:28:50.051320   10917 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 08:28:50.055239   10917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 08:28:50.060157   10917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 08:28:50.062402   10917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 08:28:50.064852   10917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 08:28:50.067185   10917 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 08:28:50.411875   10917 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 08:28:50.826152   10917 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 08:28:51.413496   10917 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 08:28:51.414622   10917 kubeadm.go:319] 
	I1206 08:28:51.414729   10917 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 08:28:51.414739   10917 kubeadm.go:319] 
	I1206 08:28:51.414800   10917 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 08:28:51.414806   10917 kubeadm.go:319] 
	I1206 08:28:51.414826   10917 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 08:28:51.414876   10917 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 08:28:51.414918   10917 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 08:28:51.414924   10917 kubeadm.go:319] 
	I1206 08:28:51.415047   10917 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 08:28:51.415066   10917 kubeadm.go:319] 
	I1206 08:28:51.415144   10917 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 08:28:51.415162   10917 kubeadm.go:319] 
	I1206 08:28:51.415215   10917 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 08:28:51.415315   10917 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 08:28:51.415421   10917 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 08:28:51.415434   10917 kubeadm.go:319] 
	I1206 08:28:51.415551   10917 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 08:28:51.415643   10917 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 08:28:51.415655   10917 kubeadm.go:319] 
	I1206 08:28:51.415765   10917 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0jkuew.7iwu0edepru23801 \
	I1206 08:28:51.415909   10917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 08:28:51.415942   10917 kubeadm.go:319] 	--control-plane 
	I1206 08:28:51.415957   10917 kubeadm.go:319] 
	I1206 08:28:51.416096   10917 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 08:28:51.416109   10917 kubeadm.go:319] 
	I1206 08:28:51.416207   10917 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0jkuew.7iwu0edepru23801 \
	I1206 08:28:51.416381   10917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 08:28:51.417512   10917 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 08:28:51.417633   10917 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
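The --discovery-token-ca-cert-hash printed in the kubeadm join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed from the CA certificate this setup keeps under /var/lib/minikube/certs, using a standard openssl pipeline (shown as a sketch; the hex digest should match the hash in the join output above):

    minikube -p addons-765040 ssh -- sudo sh -c \
      'openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | openssl pkey -pubin -outform der | openssl dgst -sha256 -hex'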
	I1206 08:28:51.417648   10917 cni.go:84] Creating CNI manager for ""
	I1206 08:28:51.417658   10917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 08:28:51.420097   10917 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 08:28:51.421571   10917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 08:28:51.425641   10917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 08:28:51.425658   10917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 08:28:51.438402   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
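With the docker driver and the crio runtime, minikube recommends and applies kindnet as the CNI, which is the cni.yaml applied above. A follow-up check after the apply (sketch; the app=kindnet label is assumed from the default kindnet manifest, and minikube names the kubeconfig context after the profile):

    kubectl --context addons-765040 -n kube-system get pods -l app=kindnet -o wide
    kubectl --context addons-765040 get nodes -o wide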
	I1206 08:28:51.639221   10917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 08:28:51.639278   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:51.639282   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-765040 minikube.k8s.io/updated_at=2025_12_06T08_28_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=addons-765040 minikube.k8s.io/primary=true
	I1206 08:28:51.648303   10917 ops.go:34] apiserver oom_adj: -16
	I1206 08:28:51.707710   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:52.208495   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:52.708012   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:53.208748   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:53.708688   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:54.208542   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:54.707772   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:55.208144   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:55.708348   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:56.208329   10917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:28:56.272404   10917 kubeadm.go:1114] duration metric: took 4.633179973s to wait for elevateKubeSystemPrivileges
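The repeated `kubectl get sa default` calls above are minikube polling until the default service account exists in the default namespace, which is what the elevateKubeSystemPrivileges step reported here waits for after creating the minikube-rbac cluster role binding a few lines earlier. The equivalent manual checks are simply (sketch):

    kubectl --context addons-765040 -n default get serviceaccount default
    kubectl --context addons-765040 get clusterrolebinding minikube-rbac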
	I1206 08:28:56.272442   10917 kubeadm.go:403] duration metric: took 15.060678031s to StartCluster
	I1206 08:28:56.272462   10917 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:56.272580   10917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:28:56.272945   10917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:56.273177   10917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 08:28:56.273198   10917 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 08:28:56.273275   10917 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
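The toEnable map above lists every addon considered for this profile and whether it will be enabled; the "Setting addon" lines that follow act on the true entries. After the cluster is up, the same addons can be listed and toggled with the CLI binary used throughout this report (sketch):

    out/minikube-linux-amd64 -p addons-765040 addons list
    out/minikube-linux-amd64 -p addons-765040 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-765040 addons disable inspektor-gadget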
	I1206 08:28:56.273398   10917 addons.go:70] Setting yakd=true in profile "addons-765040"
	I1206 08:28:56.273407   10917 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:28:56.273419   10917 addons.go:239] Setting addon yakd=true in "addons-765040"
	I1206 08:28:56.273420   10917 addons.go:70] Setting registry-creds=true in profile "addons-765040"
	I1206 08:28:56.273433   10917 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-765040"
	I1206 08:28:56.273413   10917 addons.go:70] Setting ingress-dns=true in profile "addons-765040"
	I1206 08:28:56.273456   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273463   10917 addons.go:70] Setting inspektor-gadget=true in profile "addons-765040"
	I1206 08:28:56.273467   10917 addons.go:239] Setting addon ingress-dns=true in "addons-765040"
	I1206 08:28:56.273475   10917 addons.go:239] Setting addon inspektor-gadget=true in "addons-765040"
	I1206 08:28:56.273478   10917 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-765040"
	I1206 08:28:56.273496   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273501   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273511   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273437   10917 addons.go:239] Setting addon registry-creds=true in "addons-765040"
	I1206 08:28:56.273628   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273635   10917 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-765040"
	I1206 08:28:56.273655   10917 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-765040"
	I1206 08:28:56.273677   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.274030   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274034   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274054   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274090   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274138   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274314   10917 addons.go:70] Setting storage-provisioner=true in profile "addons-765040"
	I1206 08:28:56.274615   10917 addons.go:239] Setting addon storage-provisioner=true in "addons-765040"
	I1206 08:28:56.274306   10917 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-765040"
	I1206 08:28:56.275192   10917 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-765040"
	I1206 08:28:56.275530   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.275932   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.273455   10917 addons.go:70] Setting metrics-server=true in profile "addons-765040"
	I1206 08:28:56.276496   10917 addons.go:239] Setting addon metrics-server=true in "addons-765040"
	I1206 08:28:56.276528   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.276931   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.276972   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.278070   10917 out.go:179] * Verifying Kubernetes components...
	I1206 08:28:56.274340   10917 addons.go:70] Setting default-storageclass=true in profile "addons-765040"
	I1206 08:28:56.278395   10917 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-765040"
	I1206 08:28:56.278813   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274349   10917 addons.go:70] Setting cloud-spanner=true in profile "addons-765040"
	I1206 08:28:56.279316   10917 addons.go:239] Setting addon cloud-spanner=true in "addons-765040"
	I1206 08:28:56.279359   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.274357   10917 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-765040"
	I1206 08:28:56.279591   10917 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-765040"
	I1206 08:28:56.279624   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.279846   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.280196   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274368   10917 addons.go:70] Setting registry=true in profile "addons-765040"
	I1206 08:28:56.280432   10917 addons.go:239] Setting addon registry=true in "addons-765040"
	I1206 08:28:56.280463   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.274379   10917 addons.go:70] Setting gcp-auth=true in profile "addons-765040"
	I1206 08:28:56.282114   10917 mustload.go:66] Loading cluster: addons-765040
	I1206 08:28:56.274389   10917 addons.go:70] Setting volcano=true in profile "addons-765040"
	I1206 08:28:56.282301   10917 addons.go:239] Setting addon volcano=true in "addons-765040"
	I1206 08:28:56.282331   10917 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:28:56.282334   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.282604   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.282822   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.274411   10917 addons.go:70] Setting ingress=true in profile "addons-765040"
	I1206 08:28:56.282901   10917 addons.go:239] Setting addon ingress=true in "addons-765040"
	I1206 08:28:56.282937   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.274875   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.285164   10917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:28:56.274399   10917 addons.go:70] Setting volumesnapshots=true in profile "addons-765040"
	I1206 08:28:56.285420   10917 addons.go:239] Setting addon volumesnapshots=true in "addons-765040"
	I1206 08:28:56.285450   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.285911   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.288727   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.292909   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.316317   10917 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 08:28:56.320272   10917 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 08:28:56.320301   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 08:28:56.320364   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.334657   10917 addons.go:239] Setting addon default-storageclass=true in "addons-765040"
	I1206 08:28:56.334706   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.335236   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.338230   10917 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1206 08:28:56.343845   10917 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 08:28:56.343869   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 08:28:56.343929   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.355053   10917 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-765040"
	I1206 08:28:56.359739   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.360960   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:28:56.364178   10917 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 08:28:56.365145   10917 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:28:56.365166   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 08:28:56.365225   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.370526   10917 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 08:28:56.371963   10917 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 08:28:56.372087   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 08:28:56.372178   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.374693   10917 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 08:28:56.377536   10917 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 08:28:56.377557   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 08:28:56.377618   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.394564   10917 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 08:28:56.395474   10917 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 08:28:56.397169   10917 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 08:28:56.397187   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 08:28:56.397275   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.397763   10917 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 08:28:56.398833   10917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 08:28:56.399089   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 08:28:56.399524   10917 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 08:28:56.399376   10917 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 08:28:56.400136   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 08:28:56.400620   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.400956   10917 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 08:28:56.400972   10917 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 08:28:56.400291   10917 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 08:28:56.401129   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.402353   10917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 08:28:56.402503   10917 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 08:28:56.402515   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 08:28:56.403242   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.404083   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 08:28:56.405786   10917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1206 08:28:56.405903   10917 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	W1206 08:28:56.407282   10917 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 08:28:56.407399   10917 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 08:28:56.407414   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 08:28:56.407510   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.407715   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 08:28:56.407824   10917 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 08:28:56.407834   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 08:28:56.407899   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.410457   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 08:28:56.411683   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 08:28:56.412878   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 08:28:56.413954   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 08:28:56.417157   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 08:28:56.418595   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 08:28:56.418617   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 08:28:56.418687   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.420058   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.424498   10917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 08:28:56.426584   10917 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 08:28:56.427920   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 08:28:56.427951   10917 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 08:28:56.428093   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.442077   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:28:56.445913   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.452814   10917 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 08:28:56.452838   10917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 08:28:56.452895   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.457214   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.465423   10917 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 08:28:56.466817   10917 out.go:179]   - Using image docker.io/busybox:stable
	I1206 08:28:56.468222   10917 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 08:28:56.468241   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 08:28:56.468307   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:28:56.477186   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.485922   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.487281   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.487292   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.490425   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.491671   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.492643   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.503156   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	W1206 08:28:56.509424   10917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 08:28:56.509459   10917 retry.go:31] will retry after 257.593068ms: ssh: handshake failed: EOF
	I1206 08:28:56.509571   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.511577   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:28:56.513408   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	W1206 08:28:56.520705   10917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 08:28:56.520793   10917 retry.go:31] will retry after 260.871931ms: ssh: handshake failed: EOF
	I1206 08:28:56.525617   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	W1206 08:28:56.529191   10917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 08:28:56.529224   10917 retry.go:31] will retry after 148.947098ms: ssh: handshake failed: EOF
	I1206 08:28:56.529609   10917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:28:56.610415   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 08:28:56.621269   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 08:28:56.623293   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 08:28:56.641967   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 08:28:56.654251   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 08:28:56.656851   10917 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 08:28:56.656882   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 08:28:56.669895   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:28:56.677136   10917 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 08:28:56.677168   10917 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 08:28:56.682536   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 08:28:56.682561   10917 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 08:28:56.687568   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 08:28:56.691481   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 08:28:56.710855   10917 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 08:28:56.710882   10917 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 08:28:56.719525   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 08:28:56.719621   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 08:28:56.738226   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 08:28:56.738253   10917 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 08:28:56.741466   10917 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 08:28:56.741554   10917 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 08:28:56.742128   10917 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1206 08:28:56.744001   10917 node_ready.go:35] waiting up to 6m0s for node "addons-765040" to be "Ready" ...
	I1206 08:28:56.750307   10917 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 08:28:56.750473   10917 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 08:28:56.788256   10917 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 08:28:56.788298   10917 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 08:28:56.794220   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 08:28:56.797255   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 08:28:56.797281   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 08:28:56.811211   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 08:28:56.811297   10917 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 08:28:56.844903   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 08:28:56.844929   10917 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 08:28:56.859713   10917 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 08:28:56.859797   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 08:28:56.860343   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 08:28:56.860420   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 08:28:56.901691   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 08:28:56.918777   10917 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 08:28:56.918869   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 08:28:56.922982   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 08:28:56.946527   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 08:28:56.946631   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 08:28:56.969893   10917 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 08:28:56.970018   10917 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 08:28:56.983427   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 08:28:56.991271   10917 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 08:28:56.991383   10917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 08:28:57.003608   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:28:57.031681   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 08:28:57.031706   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 08:28:57.043957   10917 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 08:28:57.043980   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 08:28:57.082834   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 08:28:57.087714   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 08:28:57.087741   10917 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 08:28:57.138773   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 08:28:57.138863   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 08:28:57.189412   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 08:28:57.189434   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 08:28:57.258349   10917 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 08:28:57.258377   10917 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 08:28:57.271379   10917 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-765040" context rescaled to 1 replicas
	I1206 08:28:57.307387   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 08:28:57.799575   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.108057406s)
	I1206 08:28:57.799613   10917 addons.go:495] Verifying addon ingress=true in "addons-765040"
	I1206 08:28:57.799670   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.005410428s)
	I1206 08:28:57.799811   10917 addons.go:495] Verifying addon metrics-server=true in "addons-765040"
	I1206 08:28:57.801348   10917 out.go:179] * Verifying ingress addon...
	I1206 08:28:57.801371   10917 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-765040 service yakd-dashboard -n yakd-dashboard
	
	I1206 08:28:57.803516   10917 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 08:28:57.806134   10917 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 08:28:57.806154   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:28:58.263076   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.279537762s)
	W1206 08:28:58.263133   10917 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 08:28:58.263140   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.259460835s)
	I1206 08:28:58.263160   10917 retry.go:31] will retry after 317.11688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 08:28:58.263189   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.18031332s)
	I1206 08:28:58.263208   10917 addons.go:495] Verifying addon registry=true in "addons-765040"
	I1206 08:28:58.263380   10917 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-765040"
	I1206 08:28:58.265209   10917 out.go:179] * Verifying registry addon...
	I1206 08:28:58.265213   10917 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 08:28:58.268174   10917 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 08:28:58.268192   10917 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 08:28:58.270610   10917 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 08:28:58.270628   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:28:58.271556   10917 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 08:28:58.271570   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:28:58.371711   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:28:58.580948   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1206 08:28:58.747569   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:28:58.771274   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:28:58.771360   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:28:58.806754   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:28:59.271637   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:28:59.271647   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:28:59.372932   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:28:59.771779   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:28:59.771874   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:28:59.806379   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:00.271461   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:00.271513   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:00.306885   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:00.771293   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:00.771323   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:00.806847   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:01.039394   10917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.458397401s)
	W1206 08:29:01.246853   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:01.271533   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:01.271648   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:01.372929   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:01.770940   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:01.771089   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:01.806479   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:02.271665   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:02.271676   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:02.307666   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:02.771497   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:02.771543   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:02.806294   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1206 08:29:03.247042   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:03.270975   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:03.271049   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:03.306300   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:03.771781   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:03.771831   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:03.806503   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:04.048687   10917 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 08:29:04.048751   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:29:04.066713   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:29:04.165647   10917 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 08:29:04.178252   10917 addons.go:239] Setting addon gcp-auth=true in "addons-765040"
	I1206 08:29:04.178301   10917 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:29:04.178666   10917 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:29:04.196450   10917 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 08:29:04.196495   10917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:29:04.214588   10917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:29:04.271593   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:04.271701   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:04.306911   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:04.307267   10917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 08:29:04.308806   10917 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 08:29:04.310038   10917 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 08:29:04.310050   10917 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 08:29:04.323122   10917 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 08:29:04.323143   10917 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 08:29:04.335638   10917 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 08:29:04.335656   10917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 08:29:04.347834   10917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 08:29:04.639918   10917 addons.go:495] Verifying addon gcp-auth=true in "addons-765040"
	I1206 08:29:04.641513   10917 out.go:179] * Verifying gcp-auth addon...
	I1206 08:29:04.643883   10917 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 08:29:04.647266   10917 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 08:29:04.647291   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:04.771121   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:04.771201   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:04.806731   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:05.147697   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:05.247239   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:05.271646   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:05.271680   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:05.307023   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:05.646535   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:05.771207   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:05.771329   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:05.806743   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:06.147221   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:06.271272   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:06.271343   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:06.306708   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:06.647312   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:06.771582   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:06.771671   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:06.807099   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:07.146649   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:07.247412   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:07.270599   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:07.270733   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:07.306095   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:07.646849   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:07.770705   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:07.770738   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:07.806383   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:08.147118   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:08.270626   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:08.270633   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:08.305870   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:08.646634   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:08.771884   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:08.771972   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:08.806503   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:09.147033   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:09.270964   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:09.270983   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:09.306353   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:09.648102   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:09.746279   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:09.771641   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:09.771657   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:09.806167   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:10.146699   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:10.271546   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:10.271623   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:10.307059   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:10.646538   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:10.771556   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:10.771581   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:10.806915   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:11.146810   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:11.270772   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:11.270918   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:11.306188   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:11.646801   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:11.747274   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:11.770862   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:11.770886   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:11.806240   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:12.146952   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:12.270507   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:12.270570   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:12.306952   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:12.646440   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:12.771255   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:12.771276   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:12.806614   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:13.146967   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:13.270870   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:13.270983   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:13.306291   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:13.647303   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:13.770885   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:13.770957   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:13.806461   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:14.147283   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:14.246652   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:14.270983   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:14.271037   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:14.306595   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:14.647483   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:14.771607   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:14.771677   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:14.805849   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:15.146456   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:15.272039   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:15.272472   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:15.306969   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:15.646393   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:15.771186   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:15.771206   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:15.806559   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:16.147336   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:16.271004   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:16.271041   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:16.306557   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:16.646452   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:16.747259   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:16.771667   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:16.771789   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:16.806008   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:17.146453   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:17.271082   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:17.271166   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:17.306497   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:17.647169   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:17.770869   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:17.770873   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:17.806422   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:18.147012   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:18.270886   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:18.270900   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:18.306231   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:18.647068   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:18.770611   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:18.770632   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:18.805969   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:19.147100   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:19.246475   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:19.270646   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:19.270656   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:19.306109   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:19.647063   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:19.771547   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:19.771639   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:19.805945   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:20.146620   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:20.271484   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:20.271492   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:20.306898   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:20.646313   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:20.771876   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:20.771942   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:20.806484   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:21.146814   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:21.247446   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:21.270910   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:21.270942   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:21.306466   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:21.647281   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:21.771513   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:21.771558   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:21.807277   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:22.147385   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:22.271164   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:22.271209   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:22.306540   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:22.647278   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:22.771240   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:22.771335   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:22.807016   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:23.146448   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:23.271443   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:23.271490   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:23.306867   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:23.646916   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:23.747272   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:23.771578   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:23.771705   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:23.810456   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:24.147000   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:24.271132   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:24.271206   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:24.306976   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:24.646636   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:24.771408   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:24.771434   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:24.806800   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:25.146191   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:25.270889   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:25.270914   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:25.306894   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:25.646481   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:25.771431   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:25.771431   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:25.807079   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:26.146637   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:26.247263   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:26.270809   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:26.270881   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:26.306346   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:26.647053   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:26.770847   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:26.770869   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:26.806520   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:27.147435   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:27.271323   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:27.271420   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:27.306849   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:27.646379   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:27.771104   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:27.771225   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:27.806660   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:28.146344   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:28.271097   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:28.271253   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:28.306667   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:28.646234   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:28.746623   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:28.771164   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:28.771199   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:28.806721   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:29.147314   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:29.271558   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:29.271637   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:29.306281   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:29.647297   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:29.771240   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:29.771251   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:29.806565   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:30.147183   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:30.270841   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:30.270925   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:30.306302   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:30.646956   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:30.747447   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:30.770645   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:30.770806   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:30.806262   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:31.147043   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:31.271035   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:31.271054   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:31.306660   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:31.647251   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:31.771163   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:31.771227   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:31.806627   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:32.147406   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:32.271151   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:32.271211   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:32.306376   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:32.647135   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:32.770620   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:32.770710   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:32.806207   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:33.147071   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:33.246387   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:33.271053   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:33.271145   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:33.306552   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:33.647299   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:33.771039   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:33.771127   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:33.806655   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:34.146371   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:34.271173   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:34.271195   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:34.307190   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:34.646268   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:34.770864   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:34.771025   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:34.806490   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:35.147127   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1206 08:29:35.246558   10917 node_ready.go:57] node "addons-765040" has "Ready":"False" status (will retry)
	I1206 08:29:35.271045   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:35.271036   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:35.306336   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:35.647137   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:35.771038   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:35.771056   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:35.806656   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:36.147271   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:36.270964   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:36.270972   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:36.306495   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:36.647140   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:36.771096   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:36.771131   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:36.806646   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:37.147095   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:37.246300   10917 node_ready.go:49] node "addons-765040" is "Ready"
	I1206 08:29:37.246328   10917 node_ready.go:38] duration metric: took 40.50230852s for node "addons-765040" to be "Ready" ...
	I1206 08:29:37.246342   10917 api_server.go:52] waiting for apiserver process to appear ...
	I1206 08:29:37.246399   10917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:29:37.259736   10917 api_server.go:72] duration metric: took 40.986504637s to wait for apiserver process to appear ...
	I1206 08:29:37.259760   10917 api_server.go:88] waiting for apiserver healthz status ...
	I1206 08:29:37.259776   10917 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 08:29:37.264676   10917 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 08:29:37.265627   10917 api_server.go:141] control plane version: v1.34.2
	I1206 08:29:37.265654   10917 api_server.go:131] duration metric: took 5.887155ms to wait for apiserver health ...
	I1206 08:29:37.265665   10917 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 08:29:37.269833   10917 system_pods.go:59] 20 kube-system pods found
	I1206 08:29:37.269884   10917 system_pods.go:61] "amd-gpu-device-plugin-vdlbw" [510111ef-4ea7-4ce1-9f3c-c2ab122bf34a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:37.269898   10917 system_pods.go:61] "coredns-66bc5c9577-qjx25" [36e612b0-69c4-4247-a437-43a2fcdf950d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:37.269915   10917 system_pods.go:61] "csi-hostpath-attacher-0" [01c3f146-28a2-47b5-a5bc-ca7d91d9021a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 08:29:37.269927   10917 system_pods.go:61] "csi-hostpath-resizer-0" [7264e7b1-3c31-40e6-a013-02677c390d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 08:29:37.269938   10917 system_pods.go:61] "csi-hostpathplugin-2bz69" [b2a9c9f1-c56e-4833-b5fc-208b2bb21af8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 08:29:37.269949   10917 system_pods.go:61] "etcd-addons-765040" [24481339-2633-4407-a115-aead2c19dd54] Running
	I1206 08:29:37.269956   10917 system_pods.go:61] "kindnet-v4khk" [0089bbea-3bfd-4a95-b3ed-766db95c31aa] Running
	I1206 08:29:37.269963   10917 system_pods.go:61] "kube-apiserver-addons-765040" [a4e598de-2604-4736-9548-dc9194ae94c5] Running
	I1206 08:29:37.269969   10917 system_pods.go:61] "kube-controller-manager-addons-765040" [927625b4-10d3-46ca-ae9c-3636ada9d821] Running
	I1206 08:29:37.269982   10917 system_pods.go:61] "kube-ingress-dns-minikube" [ed420759-e022-4b65-913d-c9fcc663e580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:37.269999   10917 system_pods.go:61] "kube-proxy-zbjfm" [fd65bed8-e182-429b-899b-1cf57feb776a] Running
	I1206 08:29:37.270010   10917 system_pods.go:61] "kube-scheduler-addons-765040" [429b1374-ed54-474f-8328-7f6b7fcde6f5] Running
	I1206 08:29:37.270019   10917 system_pods.go:61] "metrics-server-85b7d694d7-zrbd8" [45cd8cc8-871f-4c28-b5fb-6a042b9f441f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:37.270032   10917 system_pods.go:61] "nvidia-device-plugin-daemonset-rxnr5" [5037481f-19f2-41a8-8e3a-dc392a124155] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:37.270044   10917 system_pods.go:61] "registry-6b586f9694-cc7hl" [aeab1a5f-6caa-4183-9dc0-b1c92e9bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:37.270055   10917 system_pods.go:61] "registry-creds-764b6fb674-jxk6v" [f3523040-0131-4851-92f4-25b2922d4fc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:37.270063   10917 system_pods.go:61] "registry-proxy-62qx6" [a5766114-0431-447e-b362-4c2e9c2ce565] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:37.270075   10917 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pcq6s" [e6c8be0f-a4c4-45b7-9044-734ef566b871] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.270088   10917 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wbvlw" [f87ef71e-a62f-4177-94e0-c0acfd83cdd9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.270098   10917 system_pods.go:61] "storage-provisioner" [4ac79d82-c2b3-4299-8a78-a8cd76fdc35f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 08:29:37.270107   10917 system_pods.go:74] duration metric: took 4.434661ms to wait for pod list to return data ...
	I1206 08:29:37.270119   10917 default_sa.go:34] waiting for default service account to be created ...
	I1206 08:29:37.270579   10917 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 08:29:37.270597   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:37.270706   10917 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 08:29:37.270722   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:37.272115   10917 default_sa.go:45] found service account: "default"
	I1206 08:29:37.272136   10917 default_sa.go:55] duration metric: took 2.009516ms for default service account to be created ...
	I1206 08:29:37.272146   10917 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 08:29:37.277710   10917 system_pods.go:86] 20 kube-system pods found
	I1206 08:29:37.277745   10917 system_pods.go:89] "amd-gpu-device-plugin-vdlbw" [510111ef-4ea7-4ce1-9f3c-c2ab122bf34a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:37.277757   10917 system_pods.go:89] "coredns-66bc5c9577-qjx25" [36e612b0-69c4-4247-a437-43a2fcdf950d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:37.277766   10917 system_pods.go:89] "csi-hostpath-attacher-0" [01c3f146-28a2-47b5-a5bc-ca7d91d9021a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 08:29:37.277775   10917 system_pods.go:89] "csi-hostpath-resizer-0" [7264e7b1-3c31-40e6-a013-02677c390d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 08:29:37.277792   10917 system_pods.go:89] "csi-hostpathplugin-2bz69" [b2a9c9f1-c56e-4833-b5fc-208b2bb21af8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 08:29:37.277798   10917 system_pods.go:89] "etcd-addons-765040" [24481339-2633-4407-a115-aead2c19dd54] Running
	I1206 08:29:37.277805   10917 system_pods.go:89] "kindnet-v4khk" [0089bbea-3bfd-4a95-b3ed-766db95c31aa] Running
	I1206 08:29:37.277814   10917 system_pods.go:89] "kube-apiserver-addons-765040" [a4e598de-2604-4736-9548-dc9194ae94c5] Running
	I1206 08:29:37.277820   10917 system_pods.go:89] "kube-controller-manager-addons-765040" [927625b4-10d3-46ca-ae9c-3636ada9d821] Running
	I1206 08:29:37.277832   10917 system_pods.go:89] "kube-ingress-dns-minikube" [ed420759-e022-4b65-913d-c9fcc663e580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:37.277840   10917 system_pods.go:89] "kube-proxy-zbjfm" [fd65bed8-e182-429b-899b-1cf57feb776a] Running
	I1206 08:29:37.277846   10917 system_pods.go:89] "kube-scheduler-addons-765040" [429b1374-ed54-474f-8328-7f6b7fcde6f5] Running
	I1206 08:29:37.277857   10917 system_pods.go:89] "metrics-server-85b7d694d7-zrbd8" [45cd8cc8-871f-4c28-b5fb-6a042b9f441f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:37.277867   10917 system_pods.go:89] "nvidia-device-plugin-daemonset-rxnr5" [5037481f-19f2-41a8-8e3a-dc392a124155] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:37.277879   10917 system_pods.go:89] "registry-6b586f9694-cc7hl" [aeab1a5f-6caa-4183-9dc0-b1c92e9bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:37.277886   10917 system_pods.go:89] "registry-creds-764b6fb674-jxk6v" [f3523040-0131-4851-92f4-25b2922d4fc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:37.277897   10917 system_pods.go:89] "registry-proxy-62qx6" [a5766114-0431-447e-b362-4c2e9c2ce565] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:37.277905   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pcq6s" [e6c8be0f-a4c4-45b7-9044-734ef566b871] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.277915   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wbvlw" [f87ef71e-a62f-4177-94e0-c0acfd83cdd9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.277924   10917 system_pods.go:89] "storage-provisioner" [4ac79d82-c2b3-4299-8a78-a8cd76fdc35f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 08:29:37.277942   10917 retry.go:31] will retry after 235.392267ms: missing components: kube-dns
	I1206 08:29:37.368622   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:37.517978   10917 system_pods.go:86] 20 kube-system pods found
	I1206 08:29:37.518032   10917 system_pods.go:89] "amd-gpu-device-plugin-vdlbw" [510111ef-4ea7-4ce1-9f3c-c2ab122bf34a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:37.518042   10917 system_pods.go:89] "coredns-66bc5c9577-qjx25" [36e612b0-69c4-4247-a437-43a2fcdf950d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:37.518051   10917 system_pods.go:89] "csi-hostpath-attacher-0" [01c3f146-28a2-47b5-a5bc-ca7d91d9021a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 08:29:37.518059   10917 system_pods.go:89] "csi-hostpath-resizer-0" [7264e7b1-3c31-40e6-a013-02677c390d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 08:29:37.518076   10917 system_pods.go:89] "csi-hostpathplugin-2bz69" [b2a9c9f1-c56e-4833-b5fc-208b2bb21af8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 08:29:37.518086   10917 system_pods.go:89] "etcd-addons-765040" [24481339-2633-4407-a115-aead2c19dd54] Running
	I1206 08:29:37.518092   10917 system_pods.go:89] "kindnet-v4khk" [0089bbea-3bfd-4a95-b3ed-766db95c31aa] Running
	I1206 08:29:37.518099   10917 system_pods.go:89] "kube-apiserver-addons-765040" [a4e598de-2604-4736-9548-dc9194ae94c5] Running
	I1206 08:29:37.518106   10917 system_pods.go:89] "kube-controller-manager-addons-765040" [927625b4-10d3-46ca-ae9c-3636ada9d821] Running
	I1206 08:29:37.518116   10917 system_pods.go:89] "kube-ingress-dns-minikube" [ed420759-e022-4b65-913d-c9fcc663e580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:37.518124   10917 system_pods.go:89] "kube-proxy-zbjfm" [fd65bed8-e182-429b-899b-1cf57feb776a] Running
	I1206 08:29:37.518131   10917 system_pods.go:89] "kube-scheduler-addons-765040" [429b1374-ed54-474f-8328-7f6b7fcde6f5] Running
	I1206 08:29:37.518139   10917 system_pods.go:89] "metrics-server-85b7d694d7-zrbd8" [45cd8cc8-871f-4c28-b5fb-6a042b9f441f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:37.518150   10917 system_pods.go:89] "nvidia-device-plugin-daemonset-rxnr5" [5037481f-19f2-41a8-8e3a-dc392a124155] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:37.518161   10917 system_pods.go:89] "registry-6b586f9694-cc7hl" [aeab1a5f-6caa-4183-9dc0-b1c92e9bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:37.518171   10917 system_pods.go:89] "registry-creds-764b6fb674-jxk6v" [f3523040-0131-4851-92f4-25b2922d4fc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:37.518183   10917 system_pods.go:89] "registry-proxy-62qx6" [a5766114-0431-447e-b362-4c2e9c2ce565] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:37.518192   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pcq6s" [e6c8be0f-a4c4-45b7-9044-734ef566b871] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.518206   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wbvlw" [f87ef71e-a62f-4177-94e0-c0acfd83cdd9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.518220   10917 system_pods.go:89] "storage-provisioner" [4ac79d82-c2b3-4299-8a78-a8cd76fdc35f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 08:29:37.518242   10917 retry.go:31] will retry after 268.797227ms: missing components: kube-dns
	I1206 08:29:37.646917   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:37.775700   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:37.775931   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:37.797227   10917 system_pods.go:86] 20 kube-system pods found
	I1206 08:29:37.797264   10917 system_pods.go:89] "amd-gpu-device-plugin-vdlbw" [510111ef-4ea7-4ce1-9f3c-c2ab122bf34a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:37.797272   10917 system_pods.go:89] "coredns-66bc5c9577-qjx25" [36e612b0-69c4-4247-a437-43a2fcdf950d] Running
	I1206 08:29:37.797283   10917 system_pods.go:89] "csi-hostpath-attacher-0" [01c3f146-28a2-47b5-a5bc-ca7d91d9021a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 08:29:37.797292   10917 system_pods.go:89] "csi-hostpath-resizer-0" [7264e7b1-3c31-40e6-a013-02677c390d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 08:29:37.797303   10917 system_pods.go:89] "csi-hostpathplugin-2bz69" [b2a9c9f1-c56e-4833-b5fc-208b2bb21af8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 08:29:37.797311   10917 system_pods.go:89] "etcd-addons-765040" [24481339-2633-4407-a115-aead2c19dd54] Running
	I1206 08:29:37.797317   10917 system_pods.go:89] "kindnet-v4khk" [0089bbea-3bfd-4a95-b3ed-766db95c31aa] Running
	I1206 08:29:37.797322   10917 system_pods.go:89] "kube-apiserver-addons-765040" [a4e598de-2604-4736-9548-dc9194ae94c5] Running
	I1206 08:29:37.797328   10917 system_pods.go:89] "kube-controller-manager-addons-765040" [927625b4-10d3-46ca-ae9c-3636ada9d821] Running
	I1206 08:29:37.797337   10917 system_pods.go:89] "kube-ingress-dns-minikube" [ed420759-e022-4b65-913d-c9fcc663e580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:37.797341   10917 system_pods.go:89] "kube-proxy-zbjfm" [fd65bed8-e182-429b-899b-1cf57feb776a] Running
	I1206 08:29:37.797347   10917 system_pods.go:89] "kube-scheduler-addons-765040" [429b1374-ed54-474f-8328-7f6b7fcde6f5] Running
	I1206 08:29:37.797356   10917 system_pods.go:89] "metrics-server-85b7d694d7-zrbd8" [45cd8cc8-871f-4c28-b5fb-6a042b9f441f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:37.797363   10917 system_pods.go:89] "nvidia-device-plugin-daemonset-rxnr5" [5037481f-19f2-41a8-8e3a-dc392a124155] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:37.797373   10917 system_pods.go:89] "registry-6b586f9694-cc7hl" [aeab1a5f-6caa-4183-9dc0-b1c92e9bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:37.797381   10917 system_pods.go:89] "registry-creds-764b6fb674-jxk6v" [f3523040-0131-4851-92f4-25b2922d4fc7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:37.797390   10917 system_pods.go:89] "registry-proxy-62qx6" [a5766114-0431-447e-b362-4c2e9c2ce565] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:37.797397   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pcq6s" [e6c8be0f-a4c4-45b7-9044-734ef566b871] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.797407   10917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wbvlw" [f87ef71e-a62f-4177-94e0-c0acfd83cdd9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 08:29:37.797412   10917 system_pods.go:89] "storage-provisioner" [4ac79d82-c2b3-4299-8a78-a8cd76fdc35f] Running
	I1206 08:29:37.797422   10917 system_pods.go:126] duration metric: took 525.270292ms to wait for k8s-apps to be running ...
	I1206 08:29:37.797432   10917 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 08:29:37.797483   10917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:29:37.825813   10917 system_svc.go:56] duration metric: took 28.373343ms WaitForService to wait for kubelet
	I1206 08:29:37.825847   10917 kubeadm.go:587] duration metric: took 41.552620172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:29:37.825870   10917 node_conditions.go:102] verifying NodePressure condition ...
	I1206 08:29:37.828891   10917 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 08:29:37.828922   10917 node_conditions.go:123] node cpu capacity is 8
	I1206 08:29:37.828939   10917 node_conditions.go:105] duration metric: took 3.062685ms to run NodePressure ...
	I1206 08:29:37.828953   10917 start.go:242] waiting for startup goroutines ...
	I1206 08:29:37.875074   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:38.146887   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:38.271626   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:38.271640   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:38.307582   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:38.647930   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:38.772386   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:38.772451   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:38.873411   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:39.147256   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:39.271286   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:39.271371   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:39.307106   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:39.647785   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:39.773604   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:39.773898   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:39.831519   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:40.148694   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:40.271639   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:40.272020   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:40.306932   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:40.646847   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:40.772205   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:40.772261   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:40.807358   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:41.147512   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:41.271459   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:41.271642   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:41.307571   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:41.647851   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:41.774102   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:41.774324   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:41.808333   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:42.148976   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:42.272456   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:42.272980   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:42.307092   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:42.646821   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:42.772064   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:42.772072   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:42.806881   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:43.146863   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:43.271716   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:43.271957   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:43.307238   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:43.695490   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:43.771257   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:43.771344   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:43.809292   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:44.147170   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:44.271963   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:44.272476   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:44.307101   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:44.647012   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:44.772256   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:44.772316   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:44.807304   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:45.146711   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:45.271715   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:45.271844   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:45.307139   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:45.697016   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:45.822084   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:45.822123   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:45.822166   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:46.147658   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:46.272620   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:46.273294   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:46.307851   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:46.646843   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:46.772159   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:46.772489   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:46.806767   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:47.146728   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:47.271299   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:47.271385   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:47.306692   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:47.647127   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:47.772267   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:47.772373   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:47.807158   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:48.147231   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:48.272133   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:48.272297   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:48.307128   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:48.647127   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:48.772237   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:48.772278   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:48.806913   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:49.146732   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:49.271716   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:49.271917   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:49.306622   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:49.647753   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:49.771892   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:49.772063   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:49.807251   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:50.146890   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:50.271830   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:50.272095   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:50.306922   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:50.647739   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:50.771913   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:50.772065   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:50.842492   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:51.147977   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:51.271464   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:51.271624   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:51.307673   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:51.648044   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:51.772314   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:51.772356   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:51.807034   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:52.146927   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:52.271945   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:52.272036   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:52.372592   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:52.648201   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:52.771236   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:52.771550   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:52.807470   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:53.147740   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:53.271345   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:53.271523   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:53.306875   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:53.647142   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:53.773201   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:53.773327   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:53.807145   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:54.146605   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:54.271572   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:54.271835   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:54.306150   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:54.646612   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:54.771792   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:54.771838   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:54.807312   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:55.147350   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:55.271384   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:55.271509   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:55.307232   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:55.646838   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:55.772179   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:55.772222   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:55.806976   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:56.148324   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:56.271000   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:56.271308   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:56.307629   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:56.647897   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:56.771867   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:56.772077   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:56.806962   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:57.146894   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:57.271925   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:57.271979   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:57.373000   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:57.647050   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:57.771971   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:57.772134   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:57.806928   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:58.146765   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:58.271857   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:58.271922   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:58.372753   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:58.646422   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:58.770796   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:58.770960   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:58.806590   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:59.147835   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:59.271752   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:59.271862   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:59.306548   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:59.647794   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:29:59.771809   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:59.771947   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:59.806704   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:00.147779   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:00.271462   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:00.271624   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:00.306935   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:00.647016   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:00.771714   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:00.771764   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:00.806213   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:01.147404   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:01.271134   10917 kapi.go:107] duration metric: took 1m3.002946556s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 08:30:01.271183   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:01.306601   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:01.648044   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:01.772493   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:01.808643   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:02.148107   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:02.272110   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:02.306874   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:02.647412   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:02.771675   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:02.807643   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:03.147695   10917 kapi.go:107] duration metric: took 58.503812966s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 08:30:03.149198   10917 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-765040 cluster.
	I1206 08:30:03.150474   10917 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 08:30:03.151650   10917 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 08:30:03.272643   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:03.308372   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:03.771966   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:03.807946   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:04.272077   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:04.307974   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:04.772470   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:04.807306   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:05.271334   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:05.307128   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:05.772507   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:05.872669   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:06.271651   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:06.307927   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:06.772359   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:06.807259   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:07.271716   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:07.372111   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:07.772141   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:07.807063   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:08.272147   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:08.306320   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:08.771596   10917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:08.807215   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:09.272235   10917 kapi.go:107] duration metric: took 1m11.004038288s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 08:30:09.306663   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:09.807152   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:10.344706   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:10.806842   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:11.307647   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:11.807767   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:12.306980   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:12.807574   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:13.306750   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:13.807064   10917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:14.307441   10917 kapi.go:107] duration metric: took 1m16.503920461s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 08:30:14.309252   10917 out.go:179] * Enabled addons: registry-creds, cloud-spanner, nvidia-device-plugin, ingress-dns, inspektor-gadget, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1206 08:30:14.310517   10917 addons.go:530] duration metric: took 1m18.037246923s for enable addons: enabled=[registry-creds cloud-spanner nvidia-device-plugin ingress-dns inspektor-gadget amd-gpu-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1206 08:30:14.310560   10917 start.go:247] waiting for cluster config update ...
	I1206 08:30:14.310579   10917 start.go:256] writing updated cluster config ...
	I1206 08:30:14.310909   10917 ssh_runner.go:195] Run: rm -f paused
	I1206 08:30:14.314945   10917 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 08:30:14.318704   10917 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qjx25" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.323423   10917 pod_ready.go:94] pod "coredns-66bc5c9577-qjx25" is "Ready"
	I1206 08:30:14.323446   10917 pod_ready.go:86] duration metric: took 4.713787ms for pod "coredns-66bc5c9577-qjx25" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.325284   10917 pod_ready.go:83] waiting for pod "etcd-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.328574   10917 pod_ready.go:94] pod "etcd-addons-765040" is "Ready"
	I1206 08:30:14.328597   10917 pod_ready.go:86] duration metric: took 3.287521ms for pod "etcd-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.330428   10917 pod_ready.go:83] waiting for pod "kube-apiserver-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.333679   10917 pod_ready.go:94] pod "kube-apiserver-addons-765040" is "Ready"
	I1206 08:30:14.333703   10917 pod_ready.go:86] duration metric: took 3.251271ms for pod "kube-apiserver-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.335392   10917 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.718466   10917 pod_ready.go:94] pod "kube-controller-manager-addons-765040" is "Ready"
	I1206 08:30:14.718500   10917 pod_ready.go:86] duration metric: took 383.085368ms for pod "kube-controller-manager-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:14.918615   10917 pod_ready.go:83] waiting for pod "kube-proxy-zbjfm" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:15.319167   10917 pod_ready.go:94] pod "kube-proxy-zbjfm" is "Ready"
	I1206 08:30:15.319196   10917 pod_ready.go:86] duration metric: took 400.552236ms for pod "kube-proxy-zbjfm" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:15.519372   10917 pod_ready.go:83] waiting for pod "kube-scheduler-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:15.918836   10917 pod_ready.go:94] pod "kube-scheduler-addons-765040" is "Ready"
	I1206 08:30:15.918867   10917 pod_ready.go:86] duration metric: took 399.469373ms for pod "kube-scheduler-addons-765040" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:30:15.918878   10917 pod_ready.go:40] duration metric: took 1.603910103s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 08:30:15.963113   10917 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 08:30:15.964918   10917 out.go:179] * Done! kubectl is now configured to use "addons-765040" cluster and "default" namespace by default
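	(Editor's note on the gcp-auth message above: it names the `gcp-auth-skip-secret` label key for opting a pod out of credential mounting. A minimal sketch of such a pod manifest is shown below; the label value "true", the pod name, and the image are placeholders assumed for illustration and are not taken from this test run.)
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                   # placeholder pod name
	  labels:
	    gcp-auth-skip-secret: "true"       # label key taken from the minikube message above; value assumed
	spec:
	  containers:
	  - name: app                          # placeholder container
	    image: busybox:stable
	    command: ["sleep", "3600"]
	
	(Per the same message, pods that already exist would need to be recreated, or the addon re-enabled with --refresh, for a labeling change to take effect.)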
	
	
	==> CRI-O <==
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.136270164Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.137212743Z" level=info msg="Ran pod sandbox bacab4d6567cece3be818a0dc9de3d0d21d3d9c73270343f268f959fcf7dd4d7 with infra container: local-path-storage/helper-pod-delete-pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14/POD" id=a28ea346-7cb6-476f-9324-04e7d62e4a53 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.13837781Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=9b393e3d-7fb2-4f76-be62-7840bcc632f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.140095224Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=27c82859-cc3b-4c17-9e6a-be6df1461ea1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.14612902Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14/helper-pod" id=5bb937b5-8a65-4e69-86dc-1e42f689729d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.146292706Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.154600482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.155182962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.191736769Z" level=info msg="Created container 16d78ab3f908abbcf66c32d7e27c0ddaa0a9aad410bec7c2829aa560355a75fb: local-path-storage/helper-pod-delete-pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14/helper-pod" id=5bb937b5-8a65-4e69-86dc-1e42f689729d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.192402198Z" level=info msg="Starting container: 16d78ab3f908abbcf66c32d7e27c0ddaa0a9aad410bec7c2829aa560355a75fb" id=ac068ee1-9421-4fff-93c4-da70964cf7b6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 08:30:42 addons-765040 crio[770]: time="2025-12-06T08:30:42.194579592Z" level=info msg="Started container" PID=7581 containerID=16d78ab3f908abbcf66c32d7e27c0ddaa0a9aad410bec7c2829aa560355a75fb description=local-path-storage/helper-pod-delete-pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14/helper-pod id=ac068ee1-9421-4fff-93c4-da70964cf7b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bacab4d6567cece3be818a0dc9de3d0d21d3d9c73270343f268f959fcf7dd4d7
	Dec 06 08:30:44 addons-765040 crio[770]: time="2025-12-06T08:30:44.137065625Z" level=info msg="Stopping pod sandbox: bacab4d6567cece3be818a0dc9de3d0d21d3d9c73270343f268f959fcf7dd4d7" id=4acbf4b8-8410-4347-b49d-95a950220b22 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 08:30:44 addons-765040 crio[770]: time="2025-12-06T08:30:44.13734693Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14 Namespace:local-path-storage ID:bacab4d6567cece3be818a0dc9de3d0d21d3d9c73270343f268f959fcf7dd4d7 UID:a379e73d-f097-4bb6-bce5-bdf61312da1c NetNS:/var/run/netns/e3a250f6-596b-4beb-af2e-46202b7bdcd8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012a6d8}] Aliases:map[]}"
	Dec 06 08:30:44 addons-765040 crio[770]: time="2025-12-06T08:30:44.137472709Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14 from CNI network \"kindnet\" (type=ptp)"
	Dec 06 08:30:44 addons-765040 crio[770]: time="2025-12-06T08:30:44.154582769Z" level=info msg="Stopped pod sandbox: bacab4d6567cece3be818a0dc9de3d0d21d3d9c73270343f268f959fcf7dd4d7" id=4acbf4b8-8410-4347-b49d-95a950220b22 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 08:30:45 addons-765040 crio[770]: time="2025-12-06T08:30:45.143272857Z" level=info msg="Removing container: 16d78ab3f908abbcf66c32d7e27c0ddaa0a9aad410bec7c2829aa560355a75fb" id=bceee881-e9d4-4767-863c-2371c9646a12 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 08:30:45 addons-765040 crio[770]: time="2025-12-06T08:30:45.150193263Z" level=info msg="Removed container 16d78ab3f908abbcf66c32d7e27c0ddaa0a9aad410bec7c2829aa560355a75fb: local-path-storage/helper-pod-delete-pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14/helper-pod" id=bceee881-e9d4-4767-863c-2371c9646a12 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 08:30:47 addons-765040 crio[770]: time="2025-12-06T08:30:47.539894743Z" level=info msg="Stopping container: d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599 (timeout: 30s)" id=c75ef9d1-39db-4e49-ab32-3c8033e21a1c name=/runtime.v1.RuntimeService/StopContainer
	Dec 06 08:30:47 addons-765040 crio[770]: time="2025-12-06T08:30:47.649129026Z" level=info msg="Stopped container d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599: default/task-pv-pod/task-pv-container" id=c75ef9d1-39db-4e49-ab32-3c8033e21a1c name=/runtime.v1.RuntimeService/StopContainer
	Dec 06 08:30:47 addons-765040 crio[770]: time="2025-12-06T08:30:47.649811661Z" level=info msg="Stopping pod sandbox: ff024d64abde92a07a55d7ccce4bc4b7d63ec15a7c9c21e354d11275c18fd394" id=a6123798-1c95-4e3f-afaa-bf16476267c0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 08:30:47 addons-765040 crio[770]: time="2025-12-06T08:30:47.650112521Z" level=info msg="Got pod network &{Name:task-pv-pod Namespace:default ID:ff024d64abde92a07a55d7ccce4bc4b7d63ec15a7c9c21e354d11275c18fd394 UID:4da70611-54f8-43cc-99f4-18fd179af769 NetNS:/var/run/netns/fb1b68e9-962c-4297-9e17-9baab16fcbb5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012ab48}] Aliases:map[]}"
	Dec 06 08:30:47 addons-765040 crio[770]: time="2025-12-06T08:30:47.650282091Z" level=info msg="Deleting pod default_task-pv-pod from CNI network \"kindnet\" (type=ptp)"
	Dec 06 08:30:47 addons-765040 crio[770]: time="2025-12-06T08:30:47.67169447Z" level=info msg="Stopped pod sandbox: ff024d64abde92a07a55d7ccce4bc4b7d63ec15a7c9c21e354d11275c18fd394" id=a6123798-1c95-4e3f-afaa-bf16476267c0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 08:30:48 addons-765040 crio[770]: time="2025-12-06T08:30:48.158745875Z" level=info msg="Removing container: d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599" id=3aba9ceb-71b6-4c62-a83b-d532f1c73782 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 08:30:48 addons-765040 crio[770]: time="2025-12-06T08:30:48.166448685Z" level=info msg="Removed container d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599: default/task-pv-pod/task-pv-container" id=3aba9ceb-71b6-4c62-a83b-d532f1c73782 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	cdba2594455ea       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             6 seconds ago        Running             registry-creds                           0                   4c0dc7fbc2bdb       registry-creds-764b6fb674-jxk6v                              kube-system
	f2ee3e4aef2c9       docker.io/library/busybox@sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737                                            10 seconds ago       Exited              busybox                                  0                   606353a087094       test-local-path                                              default
	718aeb37c90c3       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          10 seconds ago       Exited              registry-test                            0                   151f5dd3c4168       registry-test                                                default
	3f62fb4800cbc       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            14 seconds ago       Exited              helper-pod                               0                   96a2c6ea72527       helper-pod-create-pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14   local-path-storage
	119aa8c250859       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              15 seconds ago       Running             nginx                                    0                   aa17ab66d9bb7       nginx                                                        default
	5d89ccab7bf00       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          30 seconds ago       Running             busybox                                  0                   e1e54352ec8d1       busybox                                                      default
	375681b28101f       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             35 seconds ago       Running             controller                               0                   73af3261e197f       ingress-nginx-controller-85d4c799dd-k228z                    ingress-nginx
	465f1ce06f190       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             35 seconds ago       Exited              patch                                    2                   3ec637d5437e4       ingress-nginx-admission-patch-f6h26                          ingress-nginx
	fa9f9971a7530       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          40 seconds ago       Running             csi-snapshotter                          0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                                     kube-system
	b9fb9ebbc4e81       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          41 seconds ago       Running             csi-provisioner                          0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                                     kube-system
	673c8f827d57b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            42 seconds ago       Running             liveness-probe                           0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                                     kube-system
	4396f604e4ece       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           42 seconds ago       Running             hostpath                                 0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                                     kube-system
	a1b45309ac30c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            43 seconds ago       Running             gadget                                   0                   57851c0aec553       gadget-qs29w                                                 gadget
	1be3dd273f2cc       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                45 seconds ago       Running             node-driver-registrar                    0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                                     kube-system
	e473ff4646804       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 46 seconds ago       Running             gcp-auth                                 0                   0609a5d39a54f       gcp-auth-78565c9fb4-jjvdb                                    gcp-auth
	885f97664324b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              47 seconds ago       Running             registry-proxy                           0                   6cc8b635c7337       registry-proxy-62qx6                                         kube-system
	d68a7f31cdd6c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      50 seconds ago       Running             volume-snapshot-controller               0                   77bb6fa77ec91       snapshot-controller-7d9fbc56b8-wbvlw                         kube-system
	6a470b7cce4a2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   50 seconds ago       Running             csi-external-health-monitor-controller   0                   4b1bdc155c7b6       csi-hostpathplugin-2bz69                                     kube-system
	6ee534b6b0267       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     51 seconds ago       Running             nvidia-device-plugin-ctr                 0                   11ff803294b05       nvidia-device-plugin-daemonset-rxnr5                         kube-system
	94882f78bdf8a       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     54 seconds ago       Running             amd-gpu-device-plugin                    0                   14ae37751bb36       amd-gpu-device-plugin-vdlbw                                  kube-system
	a19b0e90b5613       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   54 seconds ago       Exited              create                                   0                   b5d56424517c7       ingress-nginx-admission-create-xh7gb                         ingress-nginx
	354d0a526c419       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              55 seconds ago       Running             csi-resizer                              0                   686608e91534d       csi-hostpath-resizer-0                                       kube-system
	4ec65fdece7da       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             55 seconds ago       Exited              patch                                    1                   b6b732a1e43ac       gcp-auth-certs-patch-nfbpt                                   gcp-auth
	cc849afb7d2ca       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   56 seconds ago       Exited              create                                   0                   d3b67fa50203a       gcp-auth-certs-create-8cbjq                                  gcp-auth
	aeeba8f92bca8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             57 seconds ago       Running             local-path-provisioner                   0                   51dfd9569bb03       local-path-provisioner-648f6765c9-hk7zm                      local-path-storage
	d077eaf71426e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      57 seconds ago       Running             volume-snapshot-controller               0                   cb2c2964b5bda       snapshot-controller-7d9fbc56b8-pcq6s                         kube-system
	8c377935e6b6a       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               58 seconds ago       Running             cloud-spanner-emulator                   0                   85a69ad9a0a91       cloud-spanner-emulator-5bdddb765-82xzx                       default
	698b578cb2d85       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   2d1740c8e7633       csi-hostpath-attacher-0                                      kube-system
	0cb647e368899       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   d3ba6e7892f3d       kube-ingress-dns-minikube                                    kube-system
	5f25ed43f715b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   c1ec8a2800152       yakd-dashboard-5ff678cb9-m627q                               yakd-dashboard
	1e8b8db988d1b       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   5884350e2a300       metrics-server-85b7d694d7-zrbd8                              kube-system
	18c6142b8428d       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   9a6291b670694       registry-6b586f9694-cc7hl                                    kube-system
	248f63d58002c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   0ef70e92773f8       storage-provisioner                                          kube-system
	7f04ddcba299d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   33c79b605f13d       coredns-66bc5c9577-qjx25                                     kube-system
	88a0bf3b6769d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   5ed02bf7fa615       kindnet-v4khk                                                kube-system
	c9ca4911d0b8a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   7ca58eebd5a52       kube-proxy-zbjfm                                             kube-system
	ac0e422b4a248       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             2 minutes ago        Running             kube-apiserver                           0                   291defde480de       kube-apiserver-addons-765040                                 kube-system
	9164f996c22b8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             2 minutes ago        Running             etcd                                     0                   841dd4e78402c       etcd-addons-765040                                           kube-system
	c72849d4fdd71       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             2 minutes ago        Running             kube-controller-manager                  0                   bbf3390ea962c       kube-controller-manager-addons-765040                        kube-system
	a5a7b4678c49d       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             2 minutes ago        Running             kube-scheduler                           0                   02c34c53efe3a       kube-scheduler-addons-765040                                 kube-system
	
	
	==> coredns [7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5] <==
	[INFO] 10.244.0.20:35534 - 52219 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119171s
	[INFO] 10.244.0.20:42723 - 7065 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00819394s
	[INFO] 10.244.0.20:52803 - 13048 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.008959808s
	[INFO] 10.244.0.20:41906 - 16255 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006004307s
	[INFO] 10.244.0.20:58772 - 57174 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008217808s
	[INFO] 10.244.0.20:54584 - 58165 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004600849s
	[INFO] 10.244.0.20:44435 - 34526 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007939785s
	[INFO] 10.244.0.20:60426 - 41686 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000874225s
	[INFO] 10.244.0.20:38647 - 33553 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001385594s
	[INFO] 10.244.0.26:43307 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238592s
	[INFO] 10.244.0.26:59829 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00016478s
	[INFO] 10.244.0.29:36266 - 53317 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000207884s
	[INFO] 10.244.0.29:50279 - 15933 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000301833s
	[INFO] 10.244.0.29:45950 - 5931 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000119072s
	[INFO] 10.244.0.29:51250 - 43476 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000193508s
	[INFO] 10.244.0.29:33493 - 23128 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00011734s
	[INFO] 10.244.0.29:43854 - 24681 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000141369s
	[INFO] 10.244.0.29:43632 - 56752 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.007212626s
	[INFO] 10.244.0.29:48614 - 58305 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.008022412s
	[INFO] 10.244.0.29:40797 - 25353 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00551143s
	[INFO] 10.244.0.29:36691 - 53021 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005676789s
	[INFO] 10.244.0.29:36643 - 39000 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005916016s
	[INFO] 10.244.0.29:44779 - 57843 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.007322186s
	[INFO] 10.244.0.29:38877 - 39172 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001647099s
	[INFO] 10.244.0.29:44837 - 17304 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002614849s
	
	
	==> describe nodes <==
	Name:               addons-765040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-765040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=addons-765040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T08_28_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-765040
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-765040"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 08:28:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-765040
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 08:30:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 08:30:43 +0000   Sat, 06 Dec 2025 08:28:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 08:30:43 +0000   Sat, 06 Dec 2025 08:28:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 08:30:43 +0000   Sat, 06 Dec 2025 08:28:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 08:30:43 +0000   Sat, 06 Dec 2025 08:29:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-765040
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                83923667-5335-4e95-b76a-aad86daca2a8
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     cloud-spanner-emulator-5bdddb765-82xzx       0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  gadget                      gadget-qs29w                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  gcp-auth                    gcp-auth-78565c9fb4-jjvdb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-k228z    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         111s
	  kube-system                 amd-gpu-device-plugin-vdlbw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 coredns-66bc5c9577-qjx25                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpathplugin-2bz69                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 etcd-addons-765040                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-v4khk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-addons-765040                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-addons-765040        200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-zbjfm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-addons-765040                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 metrics-server-85b7d694d7-zrbd8              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         111s
	  kube-system                 nvidia-device-plugin-daemonset-rxnr5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 registry-6b586f9694-cc7hl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 registry-creds-764b6fb674-jxk6v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-proxy-62qx6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 snapshot-controller-7d9fbc56b8-pcq6s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 snapshot-controller-7d9fbc56b8-wbvlw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  local-path-storage          local-path-provisioner-648f6765c9-hk7zm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-m627q               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 111s  kube-proxy       
	  Normal  Starting                 118s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s  kubelet          Node addons-765040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s  kubelet          Node addons-765040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s  kubelet          Node addons-765040 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           113s  node-controller  Node addons-765040 event: Registered Node addons-765040 in Controller
	  Normal  NodeReady                72s   kubelet          Node addons-765040 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 6 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001866] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.416979] i8042: Warning: Keylock active
	[  +0.008715] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.515422] block sda: the capability attribute has been deprecated.
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093] <==
	{"level":"warn","ts":"2025-12-06T08:28:47.524178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.532133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.539285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.548202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.555875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.563259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.569463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.577136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.584127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.595437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.603587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.610193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.617106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.636169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.642675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:47.649615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:58.802523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:28:58.809375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:29:25.117133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:29:25.124582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:29:25.144930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44492","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T08:29:45.695462Z","caller":"traceutil/trace.go:172","msg":"trace[50646713] linearizableReadLoop","detail":"{readStateIndex:1013; appliedIndex:1013; }","duration":"143.380855ms","start":"2025-12-06T08:29:45.552063Z","end":"2025-12-06T08:29:45.695444Z","steps":["trace[50646713] 'read index received'  (duration: 143.374553ms)","trace[50646713] 'applied index is now lower than readState.Index'  (duration: 4.846µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T08:29:45.695580Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.505785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:29:45.695625Z","caller":"traceutil/trace.go:172","msg":"trace[1926135458] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:985; }","duration":"143.565896ms","start":"2025-12-06T08:29:45.552054Z","end":"2025-12-06T08:29:45.695620Z","steps":["trace[1926135458] 'agreement among raft nodes before linearized reading'  (duration: 143.479927ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:29:45.695697Z","caller":"traceutil/trace.go:172","msg":"trace[1676551139] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"154.44511ms","start":"2025-12-06T08:29:45.541233Z","end":"2025-12-06T08:29:45.695678Z","steps":["trace[1676551139] 'process raft request'  (duration: 154.275401ms)"],"step_count":1}
	
	
	==> gcp-auth [e473ff4646804b1d1dbe5234b3b5cf91c9ceacf662f64eb9094c2601c86e59b2] <==
	2025/12/06 08:30:02 GCP Auth Webhook started!
	2025/12/06 08:30:16 Ready to marshal response ...
	2025/12/06 08:30:16 Ready to write response ...
	2025/12/06 08:30:16 Ready to marshal response ...
	2025/12/06 08:30:16 Ready to write response ...
	2025/12/06 08:30:16 Ready to marshal response ...
	2025/12/06 08:30:16 Ready to write response ...
	2025/12/06 08:30:31 Ready to marshal response ...
	2025/12/06 08:30:31 Ready to write response ...
	2025/12/06 08:30:31 Ready to marshal response ...
	2025/12/06 08:30:31 Ready to write response ...
	2025/12/06 08:30:31 Ready to marshal response ...
	2025/12/06 08:30:31 Ready to write response ...
	2025/12/06 08:30:36 Ready to marshal response ...
	2025/12/06 08:30:36 Ready to write response ...
	2025/12/06 08:30:38 Ready to marshal response ...
	2025/12/06 08:30:38 Ready to write response ...
	2025/12/06 08:30:41 Ready to marshal response ...
	2025/12/06 08:30:41 Ready to write response ...
	
	
	==> kernel <==
	 08:30:48 up 13 min,  0 user,  load average: 1.40, 0.69, 0.26
	Linux addons-765040 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f] <==
	I1206 08:28:56.746463       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 08:28:56.747053       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 08:29:26.747234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 08:29:26.747233       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 08:29:26.748417       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 08:29:26.748539       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1206 08:29:28.347557       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 08:29:28.347586       1 metrics.go:72] Registering metrics
	I1206 08:29:28.347630       1 controller.go:711] "Syncing nftables rules"
	I1206 08:29:36.670773       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:29:36.670834       1 main.go:301] handling current node
	I1206 08:29:46.670964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:29:46.671104       1 main.go:301] handling current node
	I1206 08:29:56.670487       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:29:56.670518       1 main.go:301] handling current node
	I1206 08:30:06.671159       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:30:06.671198       1 main.go:301] handling current node
	I1206 08:30:16.670916       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:30:16.670956       1 main.go:301] handling current node
	I1206 08:30:26.670877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:30:26.670914       1 main.go:301] handling current node
	I1206 08:30:36.670496       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:30:36.670525       1 main.go:301] handling current node
	I1206 08:30:46.671541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 08:30:46.671579       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 08:29:41.857057       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:41.859130       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:41.863904       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:41.885103       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:41.927305       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:42.008945       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	E1206 08:29:42.170341       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.82.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.82.215:443: connect: connection refused" logger="UnhandledError"
	W1206 08:29:42.857518       1 handler_proxy.go:99] no RequestInfo found in the context
	W1206 08:29:42.857548       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 08:29:42.857585       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1206 08:29:42.857608       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1206 08:29:42.857610       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1206 08:29:42.858739       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 08:29:43.584752       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1206 08:30:24.636175       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38666: use of closed network connection
	E1206 08:30:24.783021       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38698: use of closed network connection
	I1206 08:30:31.550741       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 08:30:31.724648       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.246.28"}
	I1206 08:30:46.408801       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda] <==
	I1206 08:28:55.101874       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 08:28:55.102348       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 08:28:55.102376       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 08:28:55.102643       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 08:28:55.103679       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 08:28:55.104902       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 08:28:55.104921       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 08:28:55.105006       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 08:28:55.105056       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 08:28:55.105062       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 08:28:55.105069       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 08:28:55.107915       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 08:28:55.110273       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 08:28:55.112941       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-765040" podCIDRs=["10.244.0.0/24"]
	I1206 08:28:55.116147       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 08:28:55.126616       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1206 08:28:57.546211       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1206 08:29:25.111763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 08:29:25.111898       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1206 08:29:25.111935       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 08:29:25.135434       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1206 08:29:25.139109       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 08:29:25.212263       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 08:29:25.239949       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 08:29:40.056911       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc] <==
	I1206 08:28:56.215030       1 server_linux.go:53] "Using iptables proxy"
	I1206 08:28:56.293538       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 08:28:56.404192       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 08:28:56.404486       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 08:28:56.404586       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 08:28:56.576236       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 08:28:56.576303       1 server_linux.go:132] "Using iptables Proxier"
	I1206 08:28:56.585389       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 08:28:56.591758       1 server.go:527] "Version info" version="v1.34.2"
	I1206 08:28:56.591792       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 08:28:56.595682       1 config.go:200] "Starting service config controller"
	I1206 08:28:56.595709       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 08:28:56.595739       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 08:28:56.595754       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 08:28:56.595777       1 config.go:106] "Starting endpoint slice config controller"
	I1206 08:28:56.595783       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 08:28:56.595958       1 config.go:309] "Starting node config controller"
	I1206 08:28:56.596032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 08:28:56.596086       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 08:28:56.695904       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 08:28:56.698067       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 08:28:56.698096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d] <==
	E1206 08:28:48.103545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 08:28:48.103659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 08:28:48.103680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 08:28:48.103798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 08:28:48.103842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 08:28:48.103872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 08:28:48.104030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 08:28:48.104083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 08:28:48.104091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 08:28:48.104090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 08:28:48.104176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 08:28:48.104179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 08:28:48.104190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 08:28:48.104179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 08:28:48.104243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 08:28:48.942382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 08:28:48.951502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 08:28:48.979909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 08:28:49.090391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 08:28:49.136006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 08:28:49.144974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 08:28:49.269342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 08:28:49.312596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 08:28:49.317589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1206 08:28:49.701588       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 08:30:44 addons-765040 kubelet[1295]: I1206 08:30:44.323806    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a379e73d-f097-4bb6-bce5-bdf61312da1c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a379e73d-f097-4bb6-bce5-bdf61312da1c" (UID: "a379e73d-f097-4bb6-bce5-bdf61312da1c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 06 08:30:44 addons-765040 kubelet[1295]: I1206 08:30:44.323888    1295 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a379e73d-f097-4bb6-bce5-bdf61312da1c-gcp-creds\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:30:44 addons-765040 kubelet[1295]: I1206 08:30:44.323908    1295 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a379e73d-f097-4bb6-bce5-bdf61312da1c-data\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:30:44 addons-765040 kubelet[1295]: I1206 08:30:44.324212    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a379e73d-f097-4bb6-bce5-bdf61312da1c-script" (OuterVolumeSpecName: "script") pod "a379e73d-f097-4bb6-bce5-bdf61312da1c" (UID: "a379e73d-f097-4bb6-bce5-bdf61312da1c"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 06 08:30:44 addons-765040 kubelet[1295]: I1206 08:30:44.325809    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a379e73d-f097-4bb6-bce5-bdf61312da1c-kube-api-access-gzjqj" (OuterVolumeSpecName: "kube-api-access-gzjqj") pod "a379e73d-f097-4bb6-bce5-bdf61312da1c" (UID: "a379e73d-f097-4bb6-bce5-bdf61312da1c"). InnerVolumeSpecName "kube-api-access-gzjqj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 08:30:44 addons-765040 kubelet[1295]: I1206 08:30:44.425106    1295 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a379e73d-f097-4bb6-bce5-bdf61312da1c-script\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:30:44 addons-765040 kubelet[1295]: I1206 08:30:44.425155    1295 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gzjqj\" (UniqueName: \"kubernetes.io/projected/a379e73d-f097-4bb6-bce5-bdf61312da1c-kube-api-access-gzjqj\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:30:44 addons-765040 kubelet[1295]: I1206 08:30:44.644571    1295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a379e73d-f097-4bb6-bce5-bdf61312da1c" path="/var/lib/kubelet/pods/a379e73d-f097-4bb6-bce5-bdf61312da1c/volumes"
	Dec 06 08:30:45 addons-765040 kubelet[1295]: I1206 08:30:45.142059    1295 scope.go:117] "RemoveContainer" containerID="16d78ab3f908abbcf66c32d7e27c0ddaa0a9aad410bec7c2829aa560355a75fb"
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.748849    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4da70611-54f8-43cc-99f4-18fd179af769-gcp-creds\") pod \"4da70611-54f8-43cc-99f4-18fd179af769\" (UID: \"4da70611-54f8-43cc-99f4-18fd179af769\") "
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.748966    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da70611-54f8-43cc-99f4-18fd179af769-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "4da70611-54f8-43cc-99f4-18fd179af769" (UID: "4da70611-54f8-43cc-99f4-18fd179af769"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.749015    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^d7ccb91c-d27d-11f0-849d-2ef9a20420a0\") pod \"4da70611-54f8-43cc-99f4-18fd179af769\" (UID: \"4da70611-54f8-43cc-99f4-18fd179af769\") "
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.749106    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vvgx\" (UniqueName: \"kubernetes.io/projected/4da70611-54f8-43cc-99f4-18fd179af769-kube-api-access-2vvgx\") pod \"4da70611-54f8-43cc-99f4-18fd179af769\" (UID: \"4da70611-54f8-43cc-99f4-18fd179af769\") "
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.749242    1295 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4da70611-54f8-43cc-99f4-18fd179af769-gcp-creds\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.751337    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da70611-54f8-43cc-99f4-18fd179af769-kube-api-access-2vvgx" (OuterVolumeSpecName: "kube-api-access-2vvgx") pod "4da70611-54f8-43cc-99f4-18fd179af769" (UID: "4da70611-54f8-43cc-99f4-18fd179af769"). InnerVolumeSpecName "kube-api-access-2vvgx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.752553    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^d7ccb91c-d27d-11f0-849d-2ef9a20420a0" (OuterVolumeSpecName: "task-pv-storage") pod "4da70611-54f8-43cc-99f4-18fd179af769" (UID: "4da70611-54f8-43cc-99f4-18fd179af769"). InnerVolumeSpecName "pvc-7f5b5e32-f04e-4839-8fc3-ab896753cc1c". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.849779    1295 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2vvgx\" (UniqueName: \"kubernetes.io/projected/4da70611-54f8-43cc-99f4-18fd179af769-kube-api-access-2vvgx\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.849837    1295 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-7f5b5e32-f04e-4839-8fc3-ab896753cc1c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^d7ccb91c-d27d-11f0-849d-2ef9a20420a0\") on node \"addons-765040\" "
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.855343    1295 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-7f5b5e32-f04e-4839-8fc3-ab896753cc1c" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^d7ccb91c-d27d-11f0-849d-2ef9a20420a0") on node "addons-765040"
	Dec 06 08:30:47 addons-765040 kubelet[1295]: I1206 08:30:47.950351    1295 reconciler_common.go:299] "Volume detached for volume \"pvc-7f5b5e32-f04e-4839-8fc3-ab896753cc1c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^d7ccb91c-d27d-11f0-849d-2ef9a20420a0\") on node \"addons-765040\" DevicePath \"\""
	Dec 06 08:30:48 addons-765040 kubelet[1295]: I1206 08:30:48.157423    1295 scope.go:117] "RemoveContainer" containerID="d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599"
	Dec 06 08:30:48 addons-765040 kubelet[1295]: I1206 08:30:48.166788    1295 scope.go:117] "RemoveContainer" containerID="d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599"
	Dec 06 08:30:48 addons-765040 kubelet[1295]: E1206 08:30:48.167467    1295 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599\": container with ID starting with d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599 not found: ID does not exist" containerID="d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599"
	Dec 06 08:30:48 addons-765040 kubelet[1295]: I1206 08:30:48.167528    1295 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599"} err="failed to get container status \"d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599\": rpc error: code = NotFound desc = could not find container \"d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599\": container with ID starting with d180ef4d1c5f19de5bf72f1df7cff0ed5786d5ff2a98178eb0090069ab4ca599 not found: ID does not exist"
	Dec 06 08:30:48 addons-765040 kubelet[1295]: I1206 08:30:48.643548    1295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da70611-54f8-43cc-99f4-18fd179af769" path="/var/lib/kubelet/pods/4da70611-54f8-43cc-99f4-18fd179af769/volumes"
	
	
	==> storage-provisioner [248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762] <==
	W1206 08:30:23.883561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:25.886049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:25.890863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:27.893647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:27.897398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:29.899840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:29.903180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:31.906168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:31.909931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:33.913033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:33.917921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:35.920618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:35.924182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:37.927540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:37.933525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:39.936361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:39.940227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:41.943579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:41.947794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:43.950514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:43.954042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:45.957587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:45.961065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:47.964673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:30:47.968497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-765040 -n addons-765040
helpers_test.go:269: (dbg) Run:  kubectl --context addons-765040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-xh7gb ingress-nginx-admission-patch-f6h26
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-765040 describe pod ingress-nginx-admission-create-xh7gb ingress-nginx-admission-patch-f6h26
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-765040 describe pod ingress-nginx-admission-create-xh7gb ingress-nginx-admission-patch-f6h26: exit status 1 (57.227104ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xh7gb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f6h26" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-765040 describe pod ingress-nginx-admission-create-xh7gb ingress-nginx-admission-patch-f6h26: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable headlamp --alsologtostderr -v=1: exit status 11 (253.865977ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:49.511135   22292 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:49.511302   22292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:49.511313   22292 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:49.511320   22292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:49.511597   22292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:49.511928   22292 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:49.512475   22292 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:49.512504   22292 addons.go:622] checking whether the cluster is paused
	I1206 08:30:49.512620   22292 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:49.512641   22292 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:49.513169   22292 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:49.532521   22292 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:49.532578   22292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:49.552777   22292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:49.647833   22292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:49.647905   22292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:49.678092   22292 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:30:49.678113   22292 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:49.678119   22292 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:49.678133   22292 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:49.678137   22292 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:49.678142   22292 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:49.678147   22292 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:49.678152   22292 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:49.678157   22292 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:49.678176   22292 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:49.678185   22292 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:49.678190   22292 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:49.678195   22292 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:49.678199   22292 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:49.678204   22292 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:49.678219   22292 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:49.678229   22292 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:49.678236   22292 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:49.678241   22292 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:49.678245   22292 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:49.678257   22292 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:49.678265   22292 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:49.678270   22292 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:49.678277   22292 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:49.678282   22292 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:49.678289   22292 cri.go:89] found id: ""
	I1206 08:30:49.678337   22292 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:49.692937   22292 out.go:203] 
	W1206 08:30:49.694255   22292 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:49.694283   22292 out.go:285] * 
	* 
	W1206 08:30:49.697353   22292 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:49.698733   22292 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.58s)
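Note on the shared failure mode: every `addons disable` call in this run exits with status 11 (MK_ADDON_DISABLE_PAUSED) at the same point, the paused-state check that shells out to `sudo runc list -f json`, which fails on this node with "open /run/runc: no such file or directory" (the profile runs ContainerRuntime=crio). The following is a minimal, illustrative Go sketch, not minikube's implementation, that re-runs the same probe from the captured stderr so the error can be reproduced in isolation on the node; the file name is hypothetical.

// repro_runc_probe.go (hypothetical helper, not part of minikube)
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exact command string taken from the captured stderr above.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("output:\n%s\n", out)
	if err != nil {
		// On this CRI-O node the expected failure is:
		//   open /run/runc: no such file or directory
		fmt.Printf("runc probe failed: %v\n", err)
	}
}

Running it on the addons-765040 node should reproduce the non-zero exit seen in each disable attempt; the same symptom repeats verbatim in the CloudSpanner and LocalPath failures below.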

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-82xzx" [fad4100b-0ed9-4362-832a-a6914265c3d1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003731777s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (238.580966ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:50.974488   22397 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:50.974624   22397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:50.974635   22397 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:50.974639   22397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:50.974858   22397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:50.975149   22397 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:50.975534   22397 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:50.975558   22397 addons.go:622] checking whether the cluster is paused
	I1206 08:30:50.975659   22397 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:50.975676   22397 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:50.976047   22397 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:50.994248   22397 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:50.994316   22397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:51.012208   22397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:51.103940   22397 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:51.104031   22397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:51.134442   22397 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:30:51.134465   22397 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:51.134470   22397 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:51.134473   22397 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:51.134476   22397 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:51.134479   22397 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:51.134482   22397 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:51.134485   22397 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:51.134488   22397 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:51.134493   22397 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:51.134496   22397 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:51.134499   22397 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:51.134502   22397 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:51.134505   22397 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:51.134508   22397 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:51.134524   22397 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:51.134531   22397 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:51.134538   22397 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:51.134543   22397 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:51.134547   22397 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:51.134555   22397 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:51.134557   22397 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:51.134560   22397 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:51.134563   22397 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:51.134566   22397 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:51.134568   22397 cri.go:89] found id: ""
	I1206 08:30:51.134606   22397 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:51.148516   22397 out.go:203] 
	W1206 08:30:51.149819   22397 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:51.149843   22397 out.go:285] * 
	* 
	W1206 08:30:51.152880   22397 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:51.156701   22397 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.16s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-765040 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-765040 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-765040 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [78cd1e3b-e523-4047-a2b5-88aaf189ede4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [78cd1e3b-e523-4047-a2b5-88aaf189ede4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [78cd1e3b-e523-4047-a2b5-88aaf189ede4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003138918s
addons_test.go:967: (dbg) Run:  kubectl --context addons-765040 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 ssh "cat /opt/local-path-provisioner/pvc-8427b594-15da-4c10-8bcb-5bcfaa7f5f14_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-765040 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-765040 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (294.068579ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:41.656511   21122 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:41.656658   21122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:41.656669   21122 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:41.656675   21122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:41.656949   21122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:41.657301   21122 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:41.657761   21122 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:41.657786   21122 addons.go:622] checking whether the cluster is paused
	I1206 08:30:41.657922   21122 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:41.657941   21122 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:41.658509   21122 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:41.682430   21122 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:41.682501   21122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:41.706495   21122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:41.811913   21122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:41.812009   21122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:41.847615   21122 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:41.847645   21122 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:41.847649   21122 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:41.847652   21122 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:41.847655   21122 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:41.847662   21122 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:41.847665   21122 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:41.847668   21122 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:41.847670   21122 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:41.847684   21122 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:41.847687   21122 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:41.847690   21122 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:41.847693   21122 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:41.847696   21122 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:41.847698   21122 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:41.847705   21122 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:41.847710   21122 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:41.847715   21122 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:41.847718   21122 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:41.847720   21122 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:41.847725   21122 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:41.847728   21122 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:41.847730   21122 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:41.847733   21122 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:41.847736   21122 cri.go:89] found id: ""
	I1206 08:30:41.847787   21122 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:41.865264   21122 out.go:203] 
	W1206 08:30:41.866596   21122 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:41.866627   21122 out.go:285] * 
	* 
	W1206 08:30:41.871380   21122 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:41.873009   21122 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.16s)
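
Note: every addon-disable failure in this run fails the same way. Before disabling an addon, minikube checks whether the cluster is paused (addons.go "checking whether the cluster is paused"), and that check shells out to "sudo runc list -f json" on the node, which exits 1 with "open /run/runc: no such file or directory" on this crio-based node. A manual reproduction sketch, assuming the addons-765040 profile from this run is still up (these commands are illustrative diagnostics, not part of the test):

	minikube -p addons-765040 ssh -- sudo ls /run/runc
	minikube -p addons-765040 ssh -- sudo runc list -f json
	minikube -p addons-765040 ssh -- sudo crictl ps --state running --quiet

The first two commands should reproduce the "no such file or directory" error shown in the stderr above, while the crictl listing shows the kube-system containers are actually running, so the failure is in the paused-state check rather than in the addons themselves.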

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rxnr5" [5037481f-19f2-41a8-8e3a-dc392a124155] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003711573s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (243.145818ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:46.936643   21423 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:46.936916   21423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:46.936926   21423 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:46.936930   21423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:46.937172   21423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:46.937481   21423 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:46.937833   21423 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:46.937853   21423 addons.go:622] checking whether the cluster is paused
	I1206 08:30:46.937944   21423 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:46.937964   21423 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:46.938368   21423 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:46.959190   21423 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:46.959249   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:46.979484   21423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:47.073684   21423 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:47.073775   21423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:47.103413   21423 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:30:47.103432   21423 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:47.103436   21423 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:47.103439   21423 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:47.103442   21423 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:47.103445   21423 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:47.103448   21423 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:47.103450   21423 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:47.103453   21423 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:47.103459   21423 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:47.103462   21423 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:47.103464   21423 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:47.103467   21423 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:47.103470   21423 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:47.103473   21423 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:47.103480   21423 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:47.103486   21423 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:47.103490   21423 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:47.103493   21423 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:47.103496   21423 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:47.103499   21423 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:47.103501   21423 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:47.103504   21423 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:47.103507   21423 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:47.103510   21423 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:47.103518   21423 cri.go:89] found id: ""
	I1206 08:30:47.103571   21423 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:47.117338   21423 out.go:203] 
	W1206 08:30:47.118675   21423 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:47.118694   21423 out.go:285] * 
	* 
	W1206 08:30:47.121697   21423 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:47.123019   21423 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                    
TestAddons/parallel/Yakd (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-m627q" [ac25d025-94f8-4e27-bdfe-40cabd4b3ed5] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003152505s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable yakd --alsologtostderr -v=1: exit status 11 (244.045919ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:45.728702   21327 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:45.728855   21327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:45.728866   21327 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:45.728872   21327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:45.729133   21327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:45.729405   21327 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:45.729733   21327 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:45.729755   21327 addons.go:622] checking whether the cluster is paused
	I1206 08:30:45.729851   21327 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:45.729872   21327 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:45.730270   21327 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:45.748433   21327 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:45.748481   21327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:45.767559   21327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:45.859628   21327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:45.859695   21327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:45.889484   21327 cri.go:89] found id: "cdba2594455eab62dc56382612f4adc17033a5127a9e49d7cfdde3550f3db5b6"
	I1206 08:30:45.889501   21327 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:45.889505   21327 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:45.889508   21327 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:45.889511   21327 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:45.889516   21327 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:45.889519   21327 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:45.889522   21327 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:45.889524   21327 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:45.889529   21327 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:45.889532   21327 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:45.889535   21327 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:45.889538   21327 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:45.889541   21327 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:45.889546   21327 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:45.889551   21327 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:45.889557   21327 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:45.889561   21327 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:45.889564   21327 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:45.889567   21327 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:45.889572   21327 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:45.889582   21327 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:45.889587   21327 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:45.889590   21327 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:45.889592   21327 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:45.889595   21327 cri.go:89] found id: ""
	I1206 08:30:45.889630   21327 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:45.903915   21327 out.go:203] 
	W1206 08:30:45.905053   21327 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:45.905068   21327 out.go:285] * 
	* 
	W1206 08:30:45.908123   21327 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:45.909369   21327 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.25s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
I1206 08:30:25.039282    9158 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-vdlbw" [510111ef-4ea7-4ce1-9f3c-c2ab122bf34a] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003368476s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-765040 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-765040 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (252.057597ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:30:31.099482   19326 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:30:31.099810   19326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:31.099821   19326 out.go:374] Setting ErrFile to fd 2...
	I1206 08:30:31.099828   19326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:30:31.100118   19326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:30:31.100388   19326 mustload.go:66] Loading cluster: addons-765040
	I1206 08:30:31.100732   19326 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:31.100746   19326 addons.go:622] checking whether the cluster is paused
	I1206 08:30:31.100827   19326 config.go:182] Loaded profile config "addons-765040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:30:31.100839   19326 host.go:66] Checking if "addons-765040" exists ...
	I1206 08:30:31.101199   19326 cli_runner.go:164] Run: docker container inspect addons-765040 --format={{.State.Status}}
	I1206 08:30:31.122480   19326 ssh_runner.go:195] Run: systemctl --version
	I1206 08:30:31.122538   19326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-765040
	I1206 08:30:31.140457   19326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/addons-765040/id_rsa Username:docker}
	I1206 08:30:31.233202   19326 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:30:31.233298   19326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:30:31.265723   19326 cri.go:89] found id: "fa9f9971a7530cf4b89bb2ac81dc937a46c21b0193a7dece9e1ee7e71a0731a7"
	I1206 08:30:31.265748   19326 cri.go:89] found id: "b9fb9ebbc4e818af31582ba8689c151e0f5025be0fd1e7f8e13a87328c2f0983"
	I1206 08:30:31.265755   19326 cri.go:89] found id: "673c8f827d57b1be9837cfaf5c37d75c24380211c1c7979fe473f482da5e66eb"
	I1206 08:30:31.265760   19326 cri.go:89] found id: "4396f604e4eceae8832bad236ce32f91e1cf18f520602fcc96b4df92041596db"
	I1206 08:30:31.265764   19326 cri.go:89] found id: "1be3dd273f2cc0632f59ae355c560a90bb1150a3cddcdee2e5b52f58c7901b66"
	I1206 08:30:31.265769   19326 cri.go:89] found id: "885f97664324bae2e77297728e52e115d15826eef985f7ef3ec65fa02e35173f"
	I1206 08:30:31.265774   19326 cri.go:89] found id: "d68a7f31cdd6c174d9b7f423afd108bcd05b131e09593e6d214ef07a3a000eba"
	I1206 08:30:31.265778   19326 cri.go:89] found id: "6a470b7cce4a20486509ca93e2e4e08d6429d19f9ca786117b0e3f1e5015e2d3"
	I1206 08:30:31.265782   19326 cri.go:89] found id: "6ee534b6b02671106dc54be014137b1f944664cb6e378d70bab65e38d41eb63b"
	I1206 08:30:31.265794   19326 cri.go:89] found id: "94882f78bdf8ae77980f53246ed2a4340934cb283ae673bee9543eb5a2bf6b41"
	I1206 08:30:31.265798   19326 cri.go:89] found id: "354d0a526c4195aa70ed9b04b55a337193a7035af696768a58aebda0627b0dfa"
	I1206 08:30:31.265802   19326 cri.go:89] found id: "d077eaf71426e870d54080516034db841ff0efc0da1d13bd0c27b4520f3affb8"
	I1206 08:30:31.265806   19326 cri.go:89] found id: "698b578cb2d855cad5995936f292da0ed14e17747624640b062973edb1d92171"
	I1206 08:30:31.265810   19326 cri.go:89] found id: "0cb647e368899faa7ab3c1a75b0854f46e01db42059e952b8298d22a95f8b338"
	I1206 08:30:31.265815   19326 cri.go:89] found id: "1e8b8db988d1b8aa90c2eb1ed8288bcd016d20ccdf91d9cc2c8774ef2d9437b8"
	I1206 08:30:31.265832   19326 cri.go:89] found id: "18c6142b8428d05e882e005726225d63f757930152d6055259789fce8929cad4"
	I1206 08:30:31.265844   19326 cri.go:89] found id: "248f63d58002cb86c78a9904986fab734ffbcd5cf3590c79339386c76d5b7762"
	I1206 08:30:31.265850   19326 cri.go:89] found id: "7f04ddcba299d8ee1cf548236de3f5750bd1e2c09155438ce5e96f6d4766a2f5"
	I1206 08:30:31.265854   19326 cri.go:89] found id: "88a0bf3b6769d7ed49515c828f892accb4e6b75061cc83a8b4c6a890f52a954f"
	I1206 08:30:31.265858   19326 cri.go:89] found id: "c9ca4911d0b8ae1a47ec64ea2bb0d25d909673bc1ae722beb8cb03712748fdbc"
	I1206 08:30:31.265864   19326 cri.go:89] found id: "ac0e422b4a2482ef7e4c200d1f80fdfbca7d07e9f6094e04519f217dbc9ba685"
	I1206 08:30:31.265875   19326 cri.go:89] found id: "9164f996c22b80baedabdc4f0bffaba2f6d73d0ed0cbf9e9263a1a3e82d6c093"
	I1206 08:30:31.265880   19326 cri.go:89] found id: "c72849d4fdd71560d1d783cfa72c1cce047bd925a1842502d6a42a5433717eda"
	I1206 08:30:31.265888   19326 cri.go:89] found id: "a5a7b4678c49d3b84857b83617953f4fa150c368c664fc88dbcdd8ff236af31d"
	I1206 08:30:31.265893   19326 cri.go:89] found id: ""
	I1206 08:30:31.265941   19326 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 08:30:31.280367   19326 out.go:203] 
	W1206 08:30:31.281957   19326 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:30:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 08:30:31.282103   19326 out.go:285] * 
	* 
	W1206 08:30:31.287520   19326 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:30:31.288729   19326 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-765040 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.26s)

                                                
                                    
TestJSONOutput/pause/Command (2.21s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-632983 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-632983 --output=json --user=testUser: exit status 80 (2.205790567s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c9a5c4df-246f-4b6a-8298-e291b132dbe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-632983 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"01c45e96-0d52-43c7-9b1a-cf421fc1f031","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-06T08:50:02Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"b1c4b6d9-04f1-4536-9381-cbbfec023fe9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-632983 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.21s)
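
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, so the underlying GUEST_PAUSE error can be pulled out of the stream directly. A minimal extraction sketch, assuming jq is installed on the test host (illustrative only, not something the test runs):

	out/minikube-linux-amd64 pause -p json-output-632983 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

Against the output above this prints the messages of the error events, including the "Pause: list running: runc: sudo runc list -f json ..." message, i.e. the same /run/runc failure seen in the addon tests.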

                                                
                                    
TestJSONOutput/unpause/Command (1.72s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-632983 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-632983 --output=json --user=testUser: exit status 80 (1.718561835s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b3ef33a8-9415-4cbc-9dc9-7d18de544955","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-632983 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"1dd73de7-0953-4438-ad61-6734a62a3e6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-06T08:50:03Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"4b5247dd-8ee0-4dd8-938f-b7efa7c19395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-632983 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.72s)

                                                
                                    
TestPause/serial/Pause (6.57s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-845581 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-845581 --alsologtostderr -v=5: exit status 80 (2.612578938s)

                                                
                                                
-- stdout --
	* Pausing node pause-845581 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:03:32.592831  210477 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:03:32.597049  210477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:03:32.597088  210477 out.go:374] Setting ErrFile to fd 2...
	I1206 09:03:32.597095  210477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:03:32.597716  210477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:03:32.598109  210477 out.go:368] Setting JSON to false
	I1206 09:03:32.598145  210477 mustload.go:66] Loading cluster: pause-845581
	I1206 09:03:32.598706  210477 config.go:182] Loaded profile config "pause-845581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:03:32.599321  210477 cli_runner.go:164] Run: docker container inspect pause-845581 --format={{.State.Status}}
	I1206 09:03:32.629507  210477 host.go:66] Checking if "pause-845581" exists ...
	I1206 09:03:32.630071  210477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:03:32.732614  210477 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-06 09:03:32.719709681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:03:32.733928  210477 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-845581 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:03:32.735954  210477 out.go:179] * Pausing node pause-845581 ... 
	I1206 09:03:32.737356  210477 host.go:66] Checking if "pause-845581" exists ...
	I1206 09:03:32.737701  210477 ssh_runner.go:195] Run: systemctl --version
	I1206 09:03:32.737757  210477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:32.762056  210477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:32.865726  210477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:03:32.882746  210477 pause.go:52] kubelet running: true
	I1206 09:03:32.882918  210477 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:03:33.112215  210477 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:03:33.112409  210477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:03:33.227691  210477 cri.go:89] found id: "aaca6c4c6a8d78096304ba23ae8765340339c98d5b9dbb278c3b356cb82203f8"
	I1206 09:03:33.227746  210477 cri.go:89] found id: "eec9250853c2ca57aebb1d21db7963b81c996c5b22fb94e27561aaf119dad7d0"
	I1206 09:03:33.227753  210477 cri.go:89] found id: "1cd69e4c42603f6f424b431e36bb72a52f823339198846f95c3b3c5480f81252"
	I1206 09:03:33.227759  210477 cri.go:89] found id: "f2ceef154e22acc6d2c2e75a1eba5b6237a4b72eaa0cc6d3cfb7e2403be267aa"
	I1206 09:03:33.227763  210477 cri.go:89] found id: "83bc849744de03965195574dcc91d751d16f46d6a16b015504af6a1c68b187ce"
	I1206 09:03:33.227767  210477 cri.go:89] found id: "3d999efa25cdcc902bdcb270fd87b1c9cc14154168b76a167a9a70ad5a7c81e3"
	I1206 09:03:33.227771  210477 cri.go:89] found id: "0d73f16cec903605672b7f5eba71cbb655f0cdf983fd6f72c5726ef836f26233"
	I1206 09:03:33.227776  210477 cri.go:89] found id: ""
	I1206 09:03:33.227836  210477 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:03:33.241094  210477 retry.go:31] will retry after 288.91321ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:03:33Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:03:33.530495  210477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:03:33.549824  210477 pause.go:52] kubelet running: false
	I1206 09:03:33.549946  210477 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:03:33.726244  210477 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:03:33.726356  210477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:03:33.808546  210477 cri.go:89] found id: "aaca6c4c6a8d78096304ba23ae8765340339c98d5b9dbb278c3b356cb82203f8"
	I1206 09:03:33.808575  210477 cri.go:89] found id: "eec9250853c2ca57aebb1d21db7963b81c996c5b22fb94e27561aaf119dad7d0"
	I1206 09:03:33.808581  210477 cri.go:89] found id: "1cd69e4c42603f6f424b431e36bb72a52f823339198846f95c3b3c5480f81252"
	I1206 09:03:33.808587  210477 cri.go:89] found id: "f2ceef154e22acc6d2c2e75a1eba5b6237a4b72eaa0cc6d3cfb7e2403be267aa"
	I1206 09:03:33.808591  210477 cri.go:89] found id: "83bc849744de03965195574dcc91d751d16f46d6a16b015504af6a1c68b187ce"
	I1206 09:03:33.808596  210477 cri.go:89] found id: "3d999efa25cdcc902bdcb270fd87b1c9cc14154168b76a167a9a70ad5a7c81e3"
	I1206 09:03:33.808600  210477 cri.go:89] found id: "0d73f16cec903605672b7f5eba71cbb655f0cdf983fd6f72c5726ef836f26233"
	I1206 09:03:33.808605  210477 cri.go:89] found id: ""
	I1206 09:03:33.808653  210477 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:03:33.821340  210477 retry.go:31] will retry after 383.978973ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:03:33Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:03:34.205954  210477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:03:34.220735  210477 pause.go:52] kubelet running: false
	I1206 09:03:34.220800  210477 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:03:34.356514  210477 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:03:34.356604  210477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:03:34.436514  210477 cri.go:89] found id: "aaca6c4c6a8d78096304ba23ae8765340339c98d5b9dbb278c3b356cb82203f8"
	I1206 09:03:34.436538  210477 cri.go:89] found id: "eec9250853c2ca57aebb1d21db7963b81c996c5b22fb94e27561aaf119dad7d0"
	I1206 09:03:34.436542  210477 cri.go:89] found id: "1cd69e4c42603f6f424b431e36bb72a52f823339198846f95c3b3c5480f81252"
	I1206 09:03:34.436555  210477 cri.go:89] found id: "f2ceef154e22acc6d2c2e75a1eba5b6237a4b72eaa0cc6d3cfb7e2403be267aa"
	I1206 09:03:34.436563  210477 cri.go:89] found id: "83bc849744de03965195574dcc91d751d16f46d6a16b015504af6a1c68b187ce"
	I1206 09:03:34.436566  210477 cri.go:89] found id: "3d999efa25cdcc902bdcb270fd87b1c9cc14154168b76a167a9a70ad5a7c81e3"
	I1206 09:03:34.436570  210477 cri.go:89] found id: "0d73f16cec903605672b7f5eba71cbb655f0cdf983fd6f72c5726ef836f26233"
	I1206 09:03:34.436574  210477 cri.go:89] found id: ""
	I1206 09:03:34.436622  210477 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:03:34.450924  210477 retry.go:31] will retry after 398.526655ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:03:34Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:03:34.850555  210477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:03:34.865891  210477 pause.go:52] kubelet running: false
	I1206 09:03:34.865952  210477 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:03:35.010590  210477 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:03:35.010724  210477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:03:35.097093  210477 cri.go:89] found id: "aaca6c4c6a8d78096304ba23ae8765340339c98d5b9dbb278c3b356cb82203f8"
	I1206 09:03:35.097117  210477 cri.go:89] found id: "eec9250853c2ca57aebb1d21db7963b81c996c5b22fb94e27561aaf119dad7d0"
	I1206 09:03:35.097124  210477 cri.go:89] found id: "1cd69e4c42603f6f424b431e36bb72a52f823339198846f95c3b3c5480f81252"
	I1206 09:03:35.097128  210477 cri.go:89] found id: "f2ceef154e22acc6d2c2e75a1eba5b6237a4b72eaa0cc6d3cfb7e2403be267aa"
	I1206 09:03:35.097133  210477 cri.go:89] found id: "83bc849744de03965195574dcc91d751d16f46d6a16b015504af6a1c68b187ce"
	I1206 09:03:35.097138  210477 cri.go:89] found id: "3d999efa25cdcc902bdcb270fd87b1c9cc14154168b76a167a9a70ad5a7c81e3"
	I1206 09:03:35.097142  210477 cri.go:89] found id: "0d73f16cec903605672b7f5eba71cbb655f0cdf983fd6f72c5726ef836f26233"
	I1206 09:03:35.097147  210477 cri.go:89] found id: ""
	I1206 09:03:35.097195  210477 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:03:35.113168  210477 out.go:203] 
	W1206 09:03:35.114775  210477 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:03:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:03:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:03:35.114805  210477 out.go:285] * 
	* 
	W1206 09:03:35.119202  210477 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:03:35.120623  210477 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-845581 --alsologtostderr -v=5" : exit status 80
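The stderr above shows the shape of the failure: the pause path first enumerates kube-system containers through crictl labels, then asks runc directly for its view of running containers with `sudo runc list -f json`, and that second step exits 1 because the runc state directory /run/runc does not exist on this node. The following is a minimal, hypothetical Go sketch (not minikube's implementation) that mirrors only those two commands so the GUEST_PAUSE condition can be reproduced in isolation on a node; the crictl label and the runc invocation are copied from the log, everything else (names, messages) is illustrative.

	// pause_probe.go: hypothetical sketch of the two-step probe seen in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// run executes a command and returns its combined output as trimmed text.
	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// Step 1: list kube-system container IDs via crictl labels, as the log does.
		ids, err := run("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		if err != nil {
			fmt.Fprintln(os.Stderr, "crictl ps failed:", err)
			os.Exit(1)
		}
		fmt.Printf("found %d kube-system containers\n", len(strings.Fields(ids)))

		// Step 2: ask runc for running containers. This is where
		// "open /run/runc: no such file or directory" surfaces when the
		// runc state directory is missing, matching the GUEST_PAUSE error.
		if out, err := run("sudo", "runc", "list", "-f", "json"); err != nil {
			fmt.Fprintf(os.Stderr, "runc list failed (%v):\n%s\n", err, out)
			os.Exit(1)
		}
		fmt.Println("runc list succeeded")
	}

On a healthy node both steps report the same container set; here the missing /run/runc directory is what turns the pause command into exit status 80.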
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-845581
helpers_test.go:243: (dbg) docker inspect pause-845581:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca",
	        "Created": "2025-12-06T09:02:51.516470191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196066,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:02:51.565506381Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca/hosts",
	        "LogPath": "/var/lib/docker/containers/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca-json.log",
	        "Name": "/pause-845581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-845581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-845581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca",
	                "LowerDir": "/var/lib/docker/overlay2/ce1291bdb12a441022c2c4203b69edb0a46f44c73a50ab80980100e45c3cd17e-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce1291bdb12a441022c2c4203b69edb0a46f44c73a50ab80980100e45c3cd17e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce1291bdb12a441022c2c4203b69edb0a46f44c73a50ab80980100e45c3cd17e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce1291bdb12a441022c2c4203b69edb0a46f44c73a50ab80980100e45c3cd17e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-845581",
	                "Source": "/var/lib/docker/volumes/pause-845581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-845581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-845581",
	                "name.minikube.sigs.k8s.io": "pause-845581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5aef11461cacedaa8351eb09a068f4dea864d4e3338e79691743012ada6e6d85",
	            "SandboxKey": "/var/run/docker/netns/5aef11461cac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-845581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5e81e2c30e63a7438fc13729f8bc264fa4f779a1240d65d428ce42a49c15327c",
	                    "EndpointID": "4fb94564c302704ab454a36d856c03f731b48339a25f3372a14929e7844faf2e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "0e:3d:68:c6:bf:3a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-845581",
	                        "3296925cd4fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
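The docker inspect dump above is what the post-mortem relies on to reach the node: the SSH endpoint is the host port mapped to 22/tcp under NetworkSettings.Ports (32978 in this run). A small hedged Go sketch of that lookup follows; it is not the test helper itself, just a stand-alone reading of the same JSON, and the container name is taken from this run.

	// inspect_port.go: hypothetical sketch that prints the host port mapped to 22/tcp.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type portBinding struct {
		HostIP   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		name := "pause-845581" // container name from the dump above; adjust as needed
		out, err := exec.Command("docker", "inspect", name).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "docker inspect failed:", err)
			os.Exit(1)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			fmt.Fprintln(os.Stderr, "unexpected inspect output:", err)
			os.Exit(1)
		}
		// Print every binding for the SSH port (127.0.0.1:32978 in this report).
		for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh reachable at %s:%s\n", b.HostIP, b.HostPort)
		}
	}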
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-845581 -n pause-845581
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-845581 -n pause-845581: exit status 2 (349.104137ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
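The status probe above prints "Running" on stdout while still exiting with status 2, and the harness records that as "may be ok" rather than aborting the post-mortem. A minimal, hypothetical Go sketch of capturing both the stdout value and the non-zero exit status of such a probe is shown below; the binary path, profile name, and messages are taken from this run and are illustrative only.

	// status_probe.go: hypothetical sketch of recording stdout plus exit status.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format", "{{.Host}}", "-p", "pause-845581", "-n", "pause-845581")
		out, err := cmd.Output() // stdout is still captured on a non-zero exit
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // record the status instead of failing hard
		} else if err != nil {
			fmt.Println("could not run status probe:", err)
			return
		}
		fmt.Printf("host state: %q (exit status %d, may be ok)\n", string(out), code)
	}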
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-845581 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-845581 logs -n 25: (1.064041765s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-735357 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                          │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:01 UTC │
	│ stop    │ -p scheduled-stop-735357 --schedule 5m -v=5 --alsologtostderr                                                                                                                                                             │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 5m -v=5 --alsologtostderr                                                                                                                                                             │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 5m -v=5 --alsologtostderr                                                                                                                                                             │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │ 06 Dec 25 09:01 UTC │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │ 06 Dec 25 09:02 UTC │
	│ delete  │ -p scheduled-stop-735357                                                                                                                                                                                                  │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ start   │ -p insufficient-storage-423078 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-423078 │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │                     │
	│ delete  │ -p insufficient-storage-423078                                                                                                                                                                                            │ insufficient-storage-423078 │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ start   │ -p force-systemd-flag-124894 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-124894   │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:03 UTC │
	│ start   │ -p offline-crio-829666 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-829666         │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │                     │
	│ start   │ -p pause-845581 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-845581                │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:03 UTC │
	│ start   │ -p force-systemd-env-894703 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-894703    │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:03 UTC │
	│ ssh     │ force-systemd-flag-124894 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-124894   │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ delete  │ -p force-systemd-flag-124894                                                                                                                                                                                              │ force-systemd-flag-124894   │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ start   │ -p cert-expiration-006207 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-006207      │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │                     │
	│ delete  │ -p force-systemd-env-894703                                                                                                                                                                                               │ force-systemd-env-894703    │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ start   │ -p cert-options-011599 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-011599         │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │                     │
	│ start   │ -p pause-845581 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-845581                │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ pause   │ -p pause-845581 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-845581                │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:03:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:03:26.360049  209035 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:03:26.360326  209035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:03:26.360337  209035 out.go:374] Setting ErrFile to fd 2...
	I1206 09:03:26.360343  209035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:03:26.360634  209035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:03:26.361165  209035 out.go:368] Setting JSON to false
	I1206 09:03:26.362621  209035 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2757,"bootTime":1765009049,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:03:26.362691  209035 start.go:143] virtualization: kvm guest
	I1206 09:03:26.364720  209035 out.go:179] * [pause-845581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:03:26.366196  209035 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:03:26.366195  209035 notify.go:221] Checking for updates...
	I1206 09:03:26.367687  209035 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:03:26.369084  209035 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:03:26.370300  209035 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:03:26.371625  209035 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:03:26.372834  209035 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:03:26.375289  209035 config.go:182] Loaded profile config "pause-845581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:03:26.375931  209035 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:03:26.401805  209035 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:03:26.401894  209035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:03:26.467828  209035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-06 09:03:26.455458844 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:03:26.467956  209035 docker.go:319] overlay module found
	I1206 09:03:26.470309  209035 out.go:179] * Using the docker driver based on existing profile
	I1206 09:03:26.474168  209035 start.go:309] selected driver: docker
	I1206 09:03:26.474200  209035 start.go:927] validating driver "docker" against &{Name:pause-845581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false reg
istry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:03:26.474352  209035 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:03:26.474471  209035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:03:26.543627  209035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-06 09:03:26.533885971 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:03:26.544464  209035 cni.go:84] Creating CNI manager for ""
	I1206 09:03:26.544547  209035 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:03:26.544611  209035 start.go:353] cluster config:
	{Name:pause-845581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:fals
e storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:03:26.546938  209035 out.go:179] * Starting "pause-845581" primary control-plane node in "pause-845581" cluster
	I1206 09:03:26.548845  209035 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:03:26.550201  209035 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:03:26.551266  209035 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:03:26.551308  209035 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:03:26.551321  209035 cache.go:65] Caching tarball of preloaded images
	I1206 09:03:26.551370  209035 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:03:26.551408  209035 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:03:26.551418  209035 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:03:26.551597  209035 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/config.json ...
	I1206 09:03:26.575185  209035 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:03:26.575205  209035 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:03:26.575226  209035 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:03:26.575258  209035 start.go:360] acquireMachinesLock for pause-845581: {Name:mk83e33767982839af7cff5ab6a30e1596ccbe89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:03:26.575334  209035 start.go:364] duration metric: took 44.023µs to acquireMachinesLock for "pause-845581"
	I1206 09:03:26.575354  209035 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:03:26.575364  209035 fix.go:54] fixHost starting: 
	I1206 09:03:26.575659  209035 cli_runner.go:164] Run: docker container inspect pause-845581 --format={{.State.Status}}
	I1206 09:03:26.595468  209035 fix.go:112] recreateIfNeeded on pause-845581: state=Running err=<nil>
	W1206 09:03:26.595509  209035 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:03:25.560083  205517 cli_runner.go:164] Run: docker network inspect cert-options-011599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:03:25.577977  205517 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:03:25.582100  205517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:03:25.592383  205517 kubeadm.go:884] updating cluster {Name:cert-options-011599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-011599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:03:25.592531  205517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:03:25.592584  205517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:03:25.624926  205517 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:03:25.624939  205517 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:03:25.625000  205517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:03:25.650376  205517 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:03:25.650394  205517 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:03:25.650401  205517 kubeadm.go:935] updating node { 192.168.94.2 8555 v1.34.2 crio true true} ...
	I1206 09:03:25.650494  205517 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-options-011599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-options-011599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:03:25.650570  205517 ssh_runner.go:195] Run: crio config
	I1206 09:03:25.696010  205517 cni.go:84] Creating CNI manager for ""
	I1206 09:03:25.696028  205517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:03:25.696045  205517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:03:25.696151  205517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8555 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-011599 NodeName:cert-options-011599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:03:25.696350  205517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-011599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:03:25.696439  205517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:03:25.704763  205517 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:03:25.704837  205517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:03:25.713520  205517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1206 09:03:25.727585  205517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:03:25.744306  205517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1206 09:03:25.757132  205517 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:03:25.760832  205517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:03:25.771325  205517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:25.853953  205517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:03:25.875410  205517 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599 for IP: 192.168.94.2
	I1206 09:03:25.875430  205517 certs.go:195] generating shared ca certs ...
	I1206 09:03:25.875447  205517 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:25.875598  205517 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:03:25.875631  205517 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:03:25.875637  205517 certs.go:257] generating profile certs ...
	I1206 09:03:25.875692  205517 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.key
	I1206 09:03:25.875700  205517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.crt with IP's: []
	I1206 09:03:25.959859  205517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.crt ...
	I1206 09:03:25.959874  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.crt: {Name:mk26f380aa9c4f22a560e9774cdcf1301eb7748c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:25.960070  205517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.key ...
	I1206 09:03:25.960081  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.key: {Name:mkdf90b5ef0cd13798706908b50265546f883bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:25.960175  205517 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key.f30fe77a
	I1206 09:03:25.960185  205517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt.f30fe77a with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:03:26.024250  205517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt.f30fe77a ...
	I1206 09:03:26.024265  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt.f30fe77a: {Name:mkdc27904be0d380fe8cb1eb4c6d6ffdceb9adda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:26.024424  205517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key.f30fe77a ...
	I1206 09:03:26.024432  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key.f30fe77a: {Name:mk7da19ae9c476dca53b275364ef8aa1f49ddad3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:26.024504  205517 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt.f30fe77a -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt
	I1206 09:03:26.024570  205517 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key.f30fe77a -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key
	I1206 09:03:26.024616  205517 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.key
	I1206 09:03:26.024628  205517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.crt with IP's: []
	I1206 09:03:26.051882  205517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.crt ...
	I1206 09:03:26.051897  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.crt: {Name:mkf564043675ccee8ed8d5914cc4a0da3d23cf19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:26.052071  205517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.key ...
	I1206 09:03:26.052084  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.key: {Name:mk53609c91a7058ef7c0bff8e83ac9fa02542e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:26.052306  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:03:26.052340  205517 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:03:26.052346  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:03:26.052370  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:03:26.052397  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:03:26.052435  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:03:26.052474  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:03:26.053023  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:03:26.071904  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:03:26.089458  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:03:26.106242  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:03:26.123716  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I1206 09:03:26.141166  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:03:26.158543  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:03:26.176552  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:03:26.194351  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:03:26.213953  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:03:26.233119  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:03:26.252366  205517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:03:26.266812  205517 ssh_runner.go:195] Run: openssl version
	I1206 09:03:26.273549  205517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:26.281782  205517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:03:26.291153  205517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:26.298764  205517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:26.298831  205517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:26.342053  205517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:03:26.350397  205517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:03:26.358958  205517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:03:26.367012  205517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:03:26.375134  205517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:03:26.379484  205517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:03:26.379535  205517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:03:26.424289  205517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:03:26.434963  205517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:03:26.444254  205517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:03:26.452680  205517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:03:26.462485  205517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:03:26.467672  205517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:03:26.467726  205517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:03:26.518096  205517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:03:26.526463  205517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
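The sequence above is minikube's CA installation step: each certificate is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), the lookup scheme OpenSSL uses for trust anchors. Below is a minimal local sketch of that hash-and-symlink step, shelling out to the same openssl invocation seen in the log; the paths are illustrative, and the real code runs these commands through minikube's SSH runner rather than locally.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the "openssl x509 -hash" + "ln -fs"
// sequence from the log. Paths and privileges are illustrative.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// "-f" semantics: replace any stale symlink pointing at an old cert.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}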
	I1206 09:03:26.534823  205517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:03:26.539088  205517 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:03:26.539147  205517 kubeadm.go:401] StartCluster: {Name:cert-options-011599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-011599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:03:26.539229  205517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:03:26.539291  205517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:03:26.570754  205517 cri.go:89] found id: ""
	I1206 09:03:26.570826  205517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:03:26.579929  205517 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:03:26.588682  205517 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:03:26.588723  205517 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:03:26.597182  205517 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:03:26.597191  205517 kubeadm.go:158] found existing configuration files:
	
	I1206 09:03:26.597230  205517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I1206 09:03:26.605210  205517 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:03:26.605261  205517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:03:26.613642  205517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I1206 09:03:26.621661  205517 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:03:26.621715  205517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:03:26.629405  205517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I1206 09:03:26.637402  205517 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:03:26.637445  205517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:03:26.645248  205517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I1206 09:03:26.652718  205517 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:03:26.652757  205517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
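The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (https://control-plane.minikube.internal:8555 for this profile); anything else is removed so kubeadm can regenerate it. A rough sketch of that loop follows, using plain exec in place of the SSH runner; the endpoint and file list are taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, matching the grep/rm pairs in the log.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8555")
}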
	I1206 09:03:26.660865  205517 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:03:26.699894  205517 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:03:26.699938  205517 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:03:26.721271  205517 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:03:26.721321  205517 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:03:26.721347  205517 kubeadm.go:319] OS: Linux
	I1206 09:03:26.721381  205517 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:03:26.721419  205517 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:03:26.721455  205517 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:03:26.721491  205517 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:03:26.721526  205517 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:03:26.721566  205517 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:03:26.721604  205517 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:03:26.721636  205517 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:03:26.786939  205517 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:03:26.787135  205517 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:03:26.787282  205517 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:03:26.795542  205517 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:03:26.797494  205517 out.go:252]   - Generating certificates and keys ...
	I1206 09:03:26.797575  205517 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:03:26.797644  205517 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:03:27.106501  205517 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:03:27.490407  205517 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:03:25.184788  203960 out.go:252]   - Generating certificates and keys ...
	I1206 09:03:25.184896  203960 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:03:25.185009  203960 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:03:25.340784  203960 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:03:25.514196  203960 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:03:25.818314  203960 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:03:26.230091  203960 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:03:26.439733  203960 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:03:26.439960  203960 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-006207 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:03:26.773805  203960 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:03:26.774294  203960 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-006207 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:03:27.076500  203960 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:03:27.116428  203960 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:03:27.479530  203960 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:03:27.479758  203960 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:03:27.631774  203960 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:03:26.597665  209035 out.go:252] * Updating the running docker "pause-845581" container ...
	I1206 09:03:26.597706  209035 machine.go:94] provisionDockerMachine start ...
	I1206 09:03:26.597760  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:26.616277  209035 main.go:143] libmachine: Using SSH client type: native
	I1206 09:03:26.616530  209035 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1206 09:03:26.616551  209035 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:03:26.747780  209035 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-845581
	
	I1206 09:03:26.747808  209035 ubuntu.go:182] provisioning hostname "pause-845581"
	I1206 09:03:26.747884  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:26.770732  209035 main.go:143] libmachine: Using SSH client type: native
	I1206 09:03:26.771077  209035 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1206 09:03:26.771096  209035 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-845581 && echo "pause-845581" | sudo tee /etc/hostname
	I1206 09:03:26.911861  209035 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-845581
	
	I1206 09:03:26.911973  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:26.933108  209035 main.go:143] libmachine: Using SSH client type: native
	I1206 09:03:26.933314  209035 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1206 09:03:26.933333  209035 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-845581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-845581/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-845581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:03:27.062732  209035 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:03:27.062761  209035 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:03:27.062804  209035 ubuntu.go:190] setting up certificates
	I1206 09:03:27.062815  209035 provision.go:84] configureAuth start
	I1206 09:03:27.062879  209035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-845581
	I1206 09:03:27.084783  209035 provision.go:143] copyHostCerts
	I1206 09:03:27.084864  209035 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:03:27.084878  209035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:03:27.084956  209035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:03:27.085089  209035 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:03:27.085101  209035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:03:27.085137  209035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:03:27.085237  209035 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:03:27.085246  209035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:03:27.085278  209035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:03:27.085357  209035 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.pause-845581 san=[127.0.0.1 192.168.103.2 localhost minikube pause-845581]
	I1206 09:03:27.306973  209035 provision.go:177] copyRemoteCerts
	I1206 09:03:27.307040  209035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:03:27.307071  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:27.326172  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:27.420389  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:03:27.438669  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:03:27.458208  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:03:27.478826  209035 provision.go:87] duration metric: took 415.993754ms to configureAuth
	I1206 09:03:27.478862  209035 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:03:27.479140  209035 config.go:182] Loaded profile config "pause-845581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:03:27.479271  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:27.502696  209035 main.go:143] libmachine: Using SSH client type: native
	I1206 09:03:27.502949  209035 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1206 09:03:27.502969  209035 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:03:27.839305  209035 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:03:27.839334  209035 machine.go:97] duration metric: took 1.2416207s to provisionDockerMachine
	I1206 09:03:27.839347  209035 start.go:293] postStartSetup for "pause-845581" (driver="docker")
	I1206 09:03:27.839359  209035 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:03:27.839436  209035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:03:27.839498  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:27.859095  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:27.954264  209035 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:03:27.958673  209035 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:03:27.958699  209035 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:03:27.958711  209035 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:03:27.958779  209035 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:03:27.958874  209035 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:03:27.959019  209035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:03:27.967824  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:03:27.989736  209035 start.go:296] duration metric: took 150.37265ms for postStartSetup
	I1206 09:03:27.989817  209035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:03:27.989865  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:28.010351  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:28.103954  209035 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:03:28.109415  209035 fix.go:56] duration metric: took 1.534045036s for fixHost
	I1206 09:03:28.109446  209035 start.go:83] releasing machines lock for "pause-845581", held for 1.534100146s
	I1206 09:03:28.109522  209035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-845581
	I1206 09:03:28.129326  209035 ssh_runner.go:195] Run: cat /version.json
	I1206 09:03:28.129387  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:28.129432  209035 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:03:28.129524  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:28.150862  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:28.151574  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:28.301937  209035 ssh_runner.go:195] Run: systemctl --version
	I1206 09:03:28.308735  209035 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:03:28.345933  209035 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:03:28.350982  209035 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:03:28.351071  209035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:03:28.359653  209035 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:03:28.359680  209035 start.go:496] detecting cgroup driver to use...
	I1206 09:03:28.359709  209035 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:03:28.359743  209035 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:03:28.374642  209035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:03:28.387742  209035 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:03:28.387789  209035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:03:28.403749  209035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:03:28.417728  209035 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:03:28.557242  209035 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:03:28.700910  209035 docker.go:234] disabling docker service ...
	I1206 09:03:28.701003  209035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:03:28.717237  209035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:03:28.733697  209035 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:03:28.858424  209035 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:03:28.979705  209035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:03:28.995098  209035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:03:29.010956  209035 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:03:29.011032  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.020271  209035 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:03:29.020330  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.029397  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.038565  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.047430  209035 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:03:29.055736  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.065317  209035 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.074184  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.083616  209035 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:03:29.091078  209035 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:03:29.098592  209035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:29.210329  209035 ssh_runner.go:195] Run: sudo systemctl restart crio
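The lines above reconfigure CRI-O in place rather than regenerating its config: sed rewrites pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf, injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, and the daemon is reloaded and restarted. Below is a minimal sketch of the same drop-in rewrite done with Go regexps instead of sed; the file path and values come from the log, and writing the file would normally require root.

package main

import (
	"os"
	"regexp"
)

// patchCrioConf rewrites the pause image and cgroup manager in a CRI-O
// drop-in file, approximating the sed commands from the log.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "`+pauseImage+`"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "`+cgroupManager+`"`)
	return os.WriteFile(path, []byte(s), 0o644)
}

func main() {
	_ = patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "systemd")
}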
	I1206 09:03:29.413711  209035 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:03:29.413776  209035 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:03:29.418103  209035 start.go:564] Will wait 60s for crictl version
	I1206 09:03:29.418161  209035 ssh_runner.go:195] Run: which crictl
	I1206 09:03:29.421890  209035 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:03:29.447793  209035 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:03:29.447873  209035 ssh_runner.go:195] Run: crio --version
	I1206 09:03:29.478884  209035 ssh_runner.go:195] Run: crio --version
	I1206 09:03:29.514368  209035 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:03:27.999839  205517 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:03:28.088028  205517 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:03:28.245547  205517 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:03:28.245808  205517 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-options-011599 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:03:28.685093  205517 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:03:28.685248  205517 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-options-011599 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:03:28.834950  205517 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:03:29.085664  205517 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:03:29.163957  205517 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:03:29.164072  205517 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:03:29.282275  205517 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:03:29.327978  205517 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:03:29.399666  205517 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:03:29.521565  205517 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:03:29.819713  205517 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:03:29.820221  205517 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:03:29.824038  205517 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:03:28.920245  203960 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:03:29.173304  203960 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:03:29.713682  203960 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:03:29.991146  203960 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:03:29.991915  203960 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:03:29.996286  203960 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1206 09:03:25.283092  194065 node_ready.go:57] node "offline-crio-829666" has "Ready":"False" status (will retry)
	W1206 09:03:27.782657  194065 node_ready.go:57] node "offline-crio-829666" has "Ready":"False" status (will retry)
	I1206 09:03:29.515651  209035 cli_runner.go:164] Run: docker network inspect pause-845581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:03:29.535659  209035 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1206 09:03:29.539938  209035 kubeadm.go:884] updating cluster {Name:pause-845581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:03:29.540110  209035 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:03:29.540166  209035 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:03:29.575775  209035 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:03:29.575794  209035 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:03:29.575838  209035 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:03:29.602053  209035 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:03:29.602076  209035 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:03:29.602084  209035 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1206 09:03:29.602182  209035 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-845581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
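The block above is the kubelet systemd drop-in minikube renders: ExecStart is cleared and re-set with node-specific flags (hostname-override, node-ip, the bootstrap and final kubeconfig paths) before being written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Below is a hedged sketch of rendering such a drop-in with text/template; the struct and field names are illustrative rather than minikube's actual types, and only a subset of the flags is shown.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit holds the node-specific values substituted into the drop-in,
// mirroring the flags visible in the log (names here are illustrative).
type kubeletUnit struct {
	BinDir, NodeName, NodeIP string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletUnit{
		BinDir:   "/var/lib/minikube/binaries/v1.34.2",
		NodeName: "pause-845581",
		NodeIP:   "192.168.103.2",
	})
}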
	I1206 09:03:29.602254  209035 ssh_runner.go:195] Run: crio config
	I1206 09:03:29.654124  209035 cni.go:84] Creating CNI manager for ""
	I1206 09:03:29.654146  209035 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:03:29.654162  209035 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:03:29.654188  209035 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-845581 NodeName:pause-845581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:03:29.654356  209035 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-845581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
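	The kubeadm.yaml dump above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". Below is a small sketch that decodes the multi-document file and reports the KubeletConfiguration's cgroupDriver, the value that has to match the "systemd" driver detected earlier in the log; it assumes the gopkg.in/yaml.v3 dependency and a minimal hand-written struct rather than the upstream config types.

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeletCfg models only the fields we want to check from the
// KubeletConfiguration document in /var/tmp/minikube/kubeadm.yaml.
type kubeletCfg struct {
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
	FailSwapOn   bool   `yaml:"failSwapOn"`
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// Walk the YAML documents one by one; unknown fields are ignored.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var c kubeletCfg
		if err := dec.Decode(&c); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if c.Kind == "KubeletConfiguration" {
			fmt.Printf("cgroupDriver=%s failSwapOn=%v\n", c.CgroupDriver, c.FailSwapOn)
		}
	}
}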
	
	I1206 09:03:29.654427  209035 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:03:29.662880  209035 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:03:29.662935  209035 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:03:29.671252  209035 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 09:03:29.684189  209035 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:03:29.697894  209035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1206 09:03:29.712889  209035 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:03:29.717720  209035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:29.847064  209035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:03:29.861945  209035 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581 for IP: 192.168.103.2
	I1206 09:03:29.861969  209035 certs.go:195] generating shared ca certs ...
	I1206 09:03:29.862016  209035 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:29.862161  209035 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:03:29.862222  209035 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:03:29.862238  209035 certs.go:257] generating profile certs ...
	I1206 09:03:29.862348  209035 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.key
	I1206 09:03:29.862445  209035 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/apiserver.key.133c68b5
	I1206 09:03:29.862504  209035 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/proxy-client.key
	I1206 09:03:29.862630  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:03:29.862677  209035 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:03:29.862692  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:03:29.862732  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:03:29.862768  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:03:29.862803  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:03:29.862860  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:03:29.863509  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:03:29.888298  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:03:29.906357  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:03:29.924155  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:03:29.941804  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:03:29.960813  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:03:29.979325  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:03:30.000762  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:03:30.019277  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:03:30.037718  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:03:30.061205  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:03:30.080390  209035 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:03:30.093403  209035 ssh_runner.go:195] Run: openssl version
	I1206 09:03:30.099416  209035 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:03:30.107030  209035 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:03:30.114724  209035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:03:30.118467  209035 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:03:30.118516  209035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:03:30.161802  209035 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:03:30.170577  209035 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:30.179753  209035 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:03:30.188179  209035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:30.193743  209035 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:30.193806  209035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:30.229368  209035 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:03:30.237532  209035 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:03:30.245243  209035 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:03:30.253773  209035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:03:30.257934  209035 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:03:30.258017  209035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:03:30.293733  209035 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:03:30.301734  209035 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:03:30.305646  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:03:30.340932  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:03:30.376546  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:03:30.416447  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:03:30.464776  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:03:30.500266  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
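The six "openssl x509 -checkend 86400" runs above verify that none of the existing control-plane certificates expire within the next 24 hours before the running cluster is reused. The same check expressed with crypto/x509 is sketched below; the file path is illustrative, and -checkend succeeds when the certificate remains valid past the given window.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires before now+d, the same condition "openssl x509 -checkend" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}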
	I1206 09:03:30.534941  209035 kubeadm.go:401] StartCluster: {Name:pause-845581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:03:30.535113  209035 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:03:30.535169  209035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:03:30.565689  209035 cri.go:89] found id: "aaca6c4c6a8d78096304ba23ae8765340339c98d5b9dbb278c3b356cb82203f8"
	I1206 09:03:30.565713  209035 cri.go:89] found id: "eec9250853c2ca57aebb1d21db7963b81c996c5b22fb94e27561aaf119dad7d0"
	I1206 09:03:30.565718  209035 cri.go:89] found id: "1cd69e4c42603f6f424b431e36bb72a52f823339198846f95c3b3c5480f81252"
	I1206 09:03:30.565724  209035 cri.go:89] found id: "f2ceef154e22acc6d2c2e75a1eba5b6237a4b72eaa0cc6d3cfb7e2403be267aa"
	I1206 09:03:30.565728  209035 cri.go:89] found id: "83bc849744de03965195574dcc91d751d16f46d6a16b015504af6a1c68b187ce"
	I1206 09:03:30.565732  209035 cri.go:89] found id: "3d999efa25cdcc902bdcb270fd87b1c9cc14154168b76a167a9a70ad5a7c81e3"
	I1206 09:03:30.565737  209035 cri.go:89] found id: "0d73f16cec903605672b7f5eba71cbb655f0cdf983fd6f72c5726ef836f26233"
	I1206 09:03:30.565742  209035 cri.go:89] found id: ""
	I1206 09:03:30.565785  209035 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:03:30.577648  209035 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:03:30Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:03:30.577704  209035 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:03:30.585619  209035 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:03:30.585640  209035 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:03:30.585685  209035 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:03:30.593576  209035 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:03:30.594427  209035 kubeconfig.go:125] found "pause-845581" server: "https://192.168.103.2:8443"
	I1206 09:03:30.595545  209035 kapi.go:59] client config for pause-845581: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.key", CAFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:03:30.596081  209035 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 09:03:30.596101  209035 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 09:03:30.596111  209035 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 09:03:30.596117  209035 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 09:03:30.596124  209035 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 09:03:30.596524  209035 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:03:30.604849  209035 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1206 09:03:30.604878  209035 kubeadm.go:602] duration metric: took 19.23166ms to restartPrimaryControlPlane
	I1206 09:03:30.604887  209035 kubeadm.go:403] duration metric: took 69.958239ms to StartCluster
	I1206 09:03:30.604904  209035 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:30.604968  209035 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:03:30.606133  209035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:30.606425  209035 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:03:30.606542  209035 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:03:30.606632  209035 config.go:182] Loaded profile config "pause-845581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:03:30.608211  209035 out.go:179] * Enabled addons: 
	I1206 09:03:30.608226  209035 out.go:179] * Verifying Kubernetes components...
	I1206 09:03:30.609470  209035 addons.go:530] duration metric: took 2.934726ms for enable addons: enabled=[]
	I1206 09:03:30.609498  209035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:30.716604  209035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:03:30.730000  209035 node_ready.go:35] waiting up to 6m0s for node "pause-845581" to be "Ready" ...
	I1206 09:03:30.737520  209035 node_ready.go:49] node "pause-845581" is "Ready"
	I1206 09:03:30.737544  209035 node_ready.go:38] duration metric: took 7.500786ms for node "pause-845581" to be "Ready" ...
	I1206 09:03:30.737557  209035 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:03:30.737600  209035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:03:30.749715  209035 api_server.go:72] duration metric: took 143.255552ms to wait for apiserver process to appear ...
	I1206 09:03:30.749738  209035 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:03:30.749755  209035 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:03:30.754487  209035 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1206 09:03:30.755489  209035 api_server.go:141] control plane version: v1.34.2
	I1206 09:03:30.755518  209035 api_server.go:131] duration metric: took 5.773967ms to wait for apiserver health ...
	I1206 09:03:30.755529  209035 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:03:30.758958  209035 system_pods.go:59] 7 kube-system pods found
	I1206 09:03:30.759028  209035 system_pods.go:61] "coredns-66bc5c9577-txc4m" [b6adb37f-ed13-4c7b-b443-6c918b25c752] Running
	I1206 09:03:30.759043  209035 system_pods.go:61] "etcd-pause-845581" [08ae8880-2ddc-47b6-ab8b-d9f523cdaef6] Running
	I1206 09:03:30.759048  209035 system_pods.go:61] "kindnet-z5h5d" [b45c72b9-d95b-4226-a8f5-e4c45609d742] Running
	I1206 09:03:30.759055  209035 system_pods.go:61] "kube-apiserver-pause-845581" [23ac1868-3d81-4ab2-81a6-9b4656fa7798] Running
	I1206 09:03:30.759059  209035 system_pods.go:61] "kube-controller-manager-pause-845581" [b627fac7-11a5-4470-ae22-f256286ec572] Running
	I1206 09:03:30.759063  209035 system_pods.go:61] "kube-proxy-qw24c" [6e3bfb60-eb08-406f-ba98-8595995bc552] Running
	I1206 09:03:30.759067  209035 system_pods.go:61] "kube-scheduler-pause-845581" [5002f38a-f3e3-4b76-9b6e-3f59303b96b4] Running
	I1206 09:03:30.759076  209035 system_pods.go:74] duration metric: took 3.540471ms to wait for pod list to return data ...
	I1206 09:03:30.759085  209035 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:03:30.761085  209035 default_sa.go:45] found service account: "default"
	I1206 09:03:30.761105  209035 default_sa.go:55] duration metric: took 2.011465ms for default service account to be created ...
	I1206 09:03:30.761115  209035 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:03:30.763357  209035 system_pods.go:86] 7 kube-system pods found
	I1206 09:03:30.763379  209035 system_pods.go:89] "coredns-66bc5c9577-txc4m" [b6adb37f-ed13-4c7b-b443-6c918b25c752] Running
	I1206 09:03:30.763391  209035 system_pods.go:89] "etcd-pause-845581" [08ae8880-2ddc-47b6-ab8b-d9f523cdaef6] Running
	I1206 09:03:30.763399  209035 system_pods.go:89] "kindnet-z5h5d" [b45c72b9-d95b-4226-a8f5-e4c45609d742] Running
	I1206 09:03:30.763403  209035 system_pods.go:89] "kube-apiserver-pause-845581" [23ac1868-3d81-4ab2-81a6-9b4656fa7798] Running
	I1206 09:03:30.763407  209035 system_pods.go:89] "kube-controller-manager-pause-845581" [b627fac7-11a5-4470-ae22-f256286ec572] Running
	I1206 09:03:30.763428  209035 system_pods.go:89] "kube-proxy-qw24c" [6e3bfb60-eb08-406f-ba98-8595995bc552] Running
	I1206 09:03:30.763432  209035 system_pods.go:89] "kube-scheduler-pause-845581" [5002f38a-f3e3-4b76-9b6e-3f59303b96b4] Running
	I1206 09:03:30.763437  209035 system_pods.go:126] duration metric: took 2.317227ms to wait for k8s-apps to be running ...
	I1206 09:03:30.763443  209035 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:03:30.763481  209035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:03:30.780926  209035 system_svc.go:56] duration metric: took 17.473605ms WaitForService to wait for kubelet
	I1206 09:03:30.780959  209035 kubeadm.go:587] duration metric: took 174.502449ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:03:30.780979  209035 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:03:30.786736  209035 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:03:30.786769  209035 node_conditions.go:123] node cpu capacity is 8
	I1206 09:03:30.786786  209035 node_conditions.go:105] duration metric: took 5.8ms to run NodePressure ...
	I1206 09:03:30.786804  209035 start.go:242] waiting for startup goroutines ...
	I1206 09:03:30.786821  209035 start.go:247] waiting for cluster config update ...
	I1206 09:03:30.786836  209035 start.go:256] writing updated cluster config ...
	I1206 09:03:30.787232  209035 ssh_runner.go:195] Run: rm -f paused
	I1206 09:03:30.791430  209035 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:03:30.792227  209035 kapi.go:59] client config for pause-845581: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.key", CAFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:03:30.795205  209035 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-txc4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.799615  209035 pod_ready.go:94] pod "coredns-66bc5c9577-txc4m" is "Ready"
	I1206 09:03:30.799635  209035 pod_ready.go:86] duration metric: took 4.411305ms for pod "coredns-66bc5c9577-txc4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.801542  209035 pod_ready.go:83] waiting for pod "etcd-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.805310  209035 pod_ready.go:94] pod "etcd-pause-845581" is "Ready"
	I1206 09:03:30.805328  209035 pod_ready.go:86] duration metric: took 3.768169ms for pod "etcd-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.807181  209035 pod_ready.go:83] waiting for pod "kube-apiserver-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.810336  209035 pod_ready.go:94] pod "kube-apiserver-pause-845581" is "Ready"
	I1206 09:03:30.810356  209035 pod_ready.go:86] duration metric: took 3.157807ms for pod "kube-apiserver-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.811941  209035 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:31.195296  209035 pod_ready.go:94] pod "kube-controller-manager-pause-845581" is "Ready"
	I1206 09:03:31.195324  209035 pod_ready.go:86] duration metric: took 383.363695ms for pod "kube-controller-manager-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:31.397093  209035 pod_ready.go:83] waiting for pod "kube-proxy-qw24c" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:31.796541  209035 pod_ready.go:94] pod "kube-proxy-qw24c" is "Ready"
	I1206 09:03:31.796571  209035 pod_ready.go:86] duration metric: took 399.446593ms for pod "kube-proxy-qw24c" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:31.996763  209035 pod_ready.go:83] waiting for pod "kube-scheduler-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:32.397751  209035 pod_ready.go:94] pod "kube-scheduler-pause-845581" is "Ready"
	I1206 09:03:32.397778  209035 pod_ready.go:86] duration metric: took 400.989902ms for pod "kube-scheduler-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:32.397792  209035 pod_ready.go:40] duration metric: took 1.606323876s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:03:32.465585  209035 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:03:32.467622  209035 out.go:179] * Done! kubectl is now configured to use "pause-845581" cluster and "default" namespace by default
	I1206 09:03:29.825908  205517 out.go:252]   - Booting up control plane ...
	I1206 09:03:29.825981  205517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:03:29.826075  205517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:03:29.826675  205517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:03:29.840479  205517 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:03:29.840602  205517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:03:29.849015  205517 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:03:29.849313  205517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:03:29.849359  205517 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:03:29.951699  205517 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:03:29.951860  205517 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:03:30.453368  205517 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.738543ms
	I1206 09:03:30.456433  205517 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:03:30.456601  205517 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8555/livez
	I1206 09:03:30.456740  205517 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:03:30.456859  205517 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:03:29.999120  203960 out.go:252]   - Booting up control plane ...
	I1206 09:03:29.999229  203960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:03:29.999315  203960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:03:29.999422  203960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:03:30.013576  203960 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:03:30.013768  203960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:03:30.021470  203960 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:03:30.021772  203960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:03:30.021833  203960 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:03:30.154046  203960 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:03:30.154218  203960 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:03:31.155164  203960 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001282364s
	I1206 09:03:31.160185  203960 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:03:31.160309  203960 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:03:31.160451  203960 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:03:31.160563  203960 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:03:33.137137  205517 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.680601887s
	I1206 09:03:34.131973  205517 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.675415181s
	I1206 09:03:34.958120  205517 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.50155539s
	I1206 09:03:34.977913  205517 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:03:34.988981  205517 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:03:35.001941  205517 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:03:35.002310  205517 kubeadm.go:319] [mark-control-plane] Marking the node cert-options-011599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:03:35.011973  205517 kubeadm.go:319] [bootstrap-token] Using token: hes1x1.cqcy7hlgwt1ngsm8
	W1206 09:03:30.282744  194065 node_ready.go:57] node "offline-crio-829666" has "Ready":"False" status (will retry)
	W1206 09:03:32.783378  194065 node_ready.go:57] node "offline-crio-829666" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.317577035Z" level=info msg="RDT not available in the host system"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.317590776Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.318544442Z" level=info msg="Conmon does support the --sync option"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.318564969Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.318579792Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.319327931Z" level=info msg="Conmon does support the --sync option"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.319344125Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.323185616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.323210419Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.323686081Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.324052556Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.324099811Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.407556301Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-txc4m Namespace:kube-system ID:4b8212f14913a8c6169c35eadea57f03d5d4339736e2e4f0e1f4f74cd770ec3d UID:b6adb37f-ed13-4c7b-b443-6c918b25c752 NetNS:/var/run/netns/b8e7c52a-861d-44df-b1fe-7d85c0e29bfd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001326c8}] Aliases:map[]}"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.407744555Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-txc4m for CNI network kindnet (type=ptp)"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408192812Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408217786Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408274311Z" level=info msg="Create NRI interface"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408401307Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408416655Z" level=info msg="runtime interface created"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408429482Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408436854Z" level=info msg="runtime interface starting up..."
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408442342Z" level=info msg="starting plugins..."
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408454738Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.40883244Z" level=info msg="No systemd watchdog enabled"
	Dec 06 09:03:29 pause-845581 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	aaca6c4c6a8d7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   4b8212f14913a       coredns-66bc5c9577-txc4m               kube-system
	eec9250853c2c       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   23 seconds ago      Running             kube-proxy                0                   3c8b30dc207a9       kube-proxy-qw24c                       kube-system
	1cd69e4c42603       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   d42009f496a5e       kindnet-z5h5d                          kube-system
	f2ceef154e22a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   34 seconds ago      Running             etcd                      0                   80bd18230ba40       etcd-pause-845581                      kube-system
	83bc849744de0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   34 seconds ago      Running             kube-apiserver            0                   7a58e8f1803b7       kube-apiserver-pause-845581            kube-system
	3d999efa25cdc       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   34 seconds ago      Running             kube-scheduler            0                   43ff2144e2164       kube-scheduler-pause-845581            kube-system
	0d73f16cec903       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   34 seconds ago      Running             kube-controller-manager   0                   5ff665d45893a       kube-controller-manager-pause-845581   kube-system
	
	
	==> coredns [aaca6c4c6a8d78096304ba23ae8765340339c98d5b9dbb278c3b356cb82203f8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50891 - 19353 "HINFO IN 6307950807946082642.8606806498488255531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032217575s
	
	
	==> describe nodes <==
	Name:               pause-845581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-845581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=pause-845581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_03_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:03:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-845581
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:03:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:03:23 +0000   Sat, 06 Dec 2025 09:03:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:03:23 +0000   Sat, 06 Dec 2025 09:03:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:03:23 +0000   Sat, 06 Dec 2025 09:03:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:03:23 +0000   Sat, 06 Dec 2025 09:03:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-845581
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                c5de0118-fe74-4283-96bb-752ca8539259
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-txc4m                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-845581                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-z5h5d                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-845581             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-845581    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-qw24c                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-845581             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node pause-845581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node pause-845581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node pause-845581 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node pause-845581 event: Registered Node pause-845581 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-845581 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [f2ceef154e22acc6d2c2e75a1eba5b6237a4b72eaa0cc6d3cfb7e2403be267aa] <==
	{"level":"warn","ts":"2025-12-06T09:03:03.518923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.526391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.536752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.548302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.556555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.566341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.575604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.587822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.597856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.610799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.622068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.636950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.736606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59694","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:03:16.304307Z","caller":"traceutil/trace.go:172","msg":"trace[284539081] linearizableReadLoop","detail":"{readStateIndex:391; appliedIndex:391; }","duration":"112.373455ms","start":"2025-12-06T09:03:16.191903Z","end":"2025-12-06T09:03:16.304276Z","steps":["trace[284539081] 'read index received'  (duration: 112.364516ms)","trace[284539081] 'applied index is now lower than readState.Index'  (duration: 7.661µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:03:16.304452Z","caller":"traceutil/trace.go:172","msg":"trace[608844041] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"138.112053ms","start":"2025-12-06T09:03:16.166325Z","end":"2025-12-06T09:03:16.304437Z","steps":["trace[608844041] 'process raft request'  (duration: 137.992209ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:03:16.304533Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.617117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:03:16.306356Z","caller":"traceutil/trace.go:172","msg":"trace[1312212085] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:379; }","duration":"114.449266ms","start":"2025-12-06T09:03:16.191892Z","end":"2025-12-06T09:03:16.306341Z","steps":["trace[1312212085] 'agreement among raft nodes before linearized reading'  (duration: 112.596857ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:03:16.544211Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.97399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-845581\" limit:1 ","response":"range_response_count:1 size:5997"}
	{"level":"info","ts":"2025-12-06T09:03:16.544282Z","caller":"traceutil/trace.go:172","msg":"trace[1397624417] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-845581; range_end:; response_count:1; response_revision:379; }","duration":"131.057799ms","start":"2025-12-06T09:03:16.413210Z","end":"2025-12-06T09:03:16.544268Z","steps":["trace[1397624417] 'agreement among raft nodes before linearized reading'  (duration: 25.406879ms)","trace[1397624417] 'range keys from in-memory index tree'  (duration: 105.525047ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:03:16.544774Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.601804ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790495131003811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.103.2\" mod_revision:204 > success:<request_put:<key:\"/registry/masterleases/192.168.103.2\" value_size:66 lease:4650418458276228000 >> failure:<request_range:<key:\"/registry/masterleases/192.168.103.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:03:16.544844Z","caller":"traceutil/trace.go:172","msg":"trace[1273427823] linearizableReadLoop","detail":"{readStateIndex:394; appliedIndex:393; }","duration":"101.969036ms","start":"2025-12-06T09:03:16.442865Z","end":"2025-12-06T09:03:16.544834Z","steps":["trace[1273427823] 'read index received'  (duration: 25.822µs)","trace[1273427823] 'applied index is now lower than readState.Index'  (duration: 101.942641ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:03:16.544889Z","caller":"traceutil/trace.go:172","msg":"trace[1500732150] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"147.590968ms","start":"2025-12-06T09:03:16.397275Z","end":"2025-12-06T09:03:16.544866Z","steps":["trace[1500732150] 'process raft request'  (duration: 41.369196ms)","trace[1500732150] 'compare'  (duration: 105.505322ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:03:16.544959Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.094534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-845581\" limit:1 ","response":"range_response_count:1 size:5560"}
	{"level":"info","ts":"2025-12-06T09:03:16.545001Z","caller":"traceutil/trace.go:172","msg":"trace[403124762] range","detail":"{range_begin:/registry/minions/pause-845581; range_end:; response_count:1; response_revision:380; }","duration":"102.126118ms","start":"2025-12-06T09:03:16.442855Z","end":"2025-12-06T09:03:16.544981Z","steps":["trace[403124762] 'agreement among raft nodes before linearized reading'  (duration: 102.018169ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:03:16.652248Z","caller":"traceutil/trace.go:172","msg":"trace[1236967041] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"100.322285ms","start":"2025-12-06T09:03:16.551906Z","end":"2025-12-06T09:03:16.652229Z","steps":["trace[1236967041] 'process raft request'  (duration: 94.994475ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:03:36 up 46 min,  0 user,  load average: 2.90, 1.74, 1.31
	Linux pause-845581 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cd69e4c42603f6f424b431e36bb72a52f823339198846f95c3b3c5480f81252] <==
	I1206 09:03:13.049175       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:03:13.049435       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1206 09:03:13.049583       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:03:13.049609       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:03:13.049637       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:03:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:03:13.345861       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:03:13.346112       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:03:13.346132       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:03:13.346281       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:03:13.743458       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:03:13.743585       1 metrics.go:72] Registering metrics
	I1206 09:03:13.743682       1 controller.go:711] "Syncing nftables rules"
	I1206 09:03:23.352116       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:03:23.352185       1 main.go:301] handling current node
	I1206 09:03:33.354073       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:03:33.354103       1 main.go:301] handling current node
	
	
	==> kube-apiserver [83bc849744de03965195574dcc91d751d16f46d6a16b015504af6a1c68b187ce] <==
	I1206 09:03:04.490815       1 policy_source.go:240] refreshing policies
	I1206 09:03:04.491414       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:03:04.497874       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:03:04.498827       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1206 09:03:04.508254       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:03:04.510832       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:03:04.546365       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:03:04.564274       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:03:05.368352       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:03:05.373026       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:03:05.373043       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:03:06.093409       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:03:06.158945       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:03:06.293714       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:03:06.306847       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1206 09:03:06.308675       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:03:06.313814       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:03:06.520298       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:03:07.388470       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:03:07.403255       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:03:07.424939       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:03:12.216290       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:03:12.221091       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:03:12.509442       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1206 09:03:12.611976       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0d73f16cec903605672b7f5eba71cbb655f0cdf983fd6f72c5726ef836f26233] <==
	I1206 09:03:11.506503       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:03:11.506511       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:03:11.506513       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:03:11.506489       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:03:11.506894       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:03:11.507032       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:03:11.507166       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:03:11.507541       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:03:11.507693       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 09:03:11.510805       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:03:11.510856       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:03:11.511119       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 09:03:11.511184       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 09:03:11.511246       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 09:03:11.511256       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 09:03:11.511262       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 09:03:11.512142       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:03:11.513194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:03:11.513781       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:03:11.516530       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:03:11.518093       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-845581" podCIDRs=["10.244.0.0/24"]
	I1206 09:03:11.522184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:03:11.528347       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:03:11.531608       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:03:26.482937       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [eec9250853c2ca57aebb1d21db7963b81c996c5b22fb94e27561aaf119dad7d0] <==
	I1206 09:03:12.948980       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:03:13.017602       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:03:13.118012       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:03:13.118051       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1206 09:03:13.118137       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:03:13.140094       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:03:13.140185       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:03:13.147300       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:03:13.147939       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:03:13.147972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:03:13.149536       1 config.go:200] "Starting service config controller"
	I1206 09:03:13.149558       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:03:13.149591       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:03:13.149598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:03:13.149615       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:03:13.149629       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:03:13.149931       1 config.go:309] "Starting node config controller"
	I1206 09:03:13.150002       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:03:13.150036       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:03:13.250579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:03:13.250624       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:03:13.250603       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3d999efa25cdcc902bdcb270fd87b1c9cc14154168b76a167a9a70ad5a7c81e3] <==
	E1206 09:03:04.527770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:03:04.527826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:03:04.527873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:03:04.527951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:03:04.528165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:03:04.528257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:03:04.528257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:03:04.528318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:03:04.528368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:03:04.528396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:03:04.528438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:03:04.531921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:03:04.533349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:03:05.381730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:03:05.383589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:03:05.389532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:03:05.430014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:03:05.508728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:03:05.542379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:03:05.563654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:03:05.608266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:03:05.648497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:03:05.669352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:03:05.787725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:03:07.515298       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:03:08 pause-845581 kubelet[1310]: E1206 09:03:08.411532    1310 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-845581\" already exists" pod="kube-system/kube-scheduler-pause-845581"
	Dec 06 09:03:08 pause-845581 kubelet[1310]: I1206 09:03:08.445581    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-845581" podStartSLOduration=2.445548424 podStartE2EDuration="2.445548424s" podCreationTimestamp="2025-12-06 09:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:08.428539462 +0000 UTC m=+1.251470457" watchObservedRunningTime="2025-12-06 09:03:08.445548424 +0000 UTC m=+1.268479427"
	Dec 06 09:03:08 pause-845581 kubelet[1310]: I1206 09:03:08.485319    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-845581" podStartSLOduration=1.485297412 podStartE2EDuration="1.485297412s" podCreationTimestamp="2025-12-06 09:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:08.454641873 +0000 UTC m=+1.277572874" watchObservedRunningTime="2025-12-06 09:03:08.485297412 +0000 UTC m=+1.308228415"
	Dec 06 09:03:08 pause-845581 kubelet[1310]: I1206 09:03:08.485487    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-845581" podStartSLOduration=3.485478736 podStartE2EDuration="3.485478736s" podCreationTimestamp="2025-12-06 09:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:08.48359497 +0000 UTC m=+1.306525973" watchObservedRunningTime="2025-12-06 09:03:08.485478736 +0000 UTC m=+1.308409739"
	Dec 06 09:03:08 pause-845581 kubelet[1310]: I1206 09:03:08.519628    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-845581" podStartSLOduration=1.519607593 podStartE2EDuration="1.519607593s" podCreationTimestamp="2025-12-06 09:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:08.502244884 +0000 UTC m=+1.325175903" watchObservedRunningTime="2025-12-06 09:03:08.519607593 +0000 UTC m=+1.342538598"
	Dec 06 09:03:11 pause-845581 kubelet[1310]: I1206 09:03:11.588069    1310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:03:11 pause-845581 kubelet[1310]: I1206 09:03:11.588807    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550397    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b45c72b9-d95b-4226-a8f5-e4c45609d742-cni-cfg\") pod \"kindnet-z5h5d\" (UID: \"b45c72b9-d95b-4226-a8f5-e4c45609d742\") " pod="kube-system/kindnet-z5h5d"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550448    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b45c72b9-d95b-4226-a8f5-e4c45609d742-lib-modules\") pod \"kindnet-z5h5d\" (UID: \"b45c72b9-d95b-4226-a8f5-e4c45609d742\") " pod="kube-system/kindnet-z5h5d"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550473    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjlj\" (UniqueName: \"kubernetes.io/projected/b45c72b9-d95b-4226-a8f5-e4c45609d742-kube-api-access-xnjlj\") pod \"kindnet-z5h5d\" (UID: \"b45c72b9-d95b-4226-a8f5-e4c45609d742\") " pod="kube-system/kindnet-z5h5d"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550507    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e3bfb60-eb08-406f-ba98-8595995bc552-kube-proxy\") pod \"kube-proxy-qw24c\" (UID: \"6e3bfb60-eb08-406f-ba98-8595995bc552\") " pod="kube-system/kube-proxy-qw24c"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550529    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e3bfb60-eb08-406f-ba98-8595995bc552-xtables-lock\") pod \"kube-proxy-qw24c\" (UID: \"6e3bfb60-eb08-406f-ba98-8595995bc552\") " pod="kube-system/kube-proxy-qw24c"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550548    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e3bfb60-eb08-406f-ba98-8595995bc552-lib-modules\") pod \"kube-proxy-qw24c\" (UID: \"6e3bfb60-eb08-406f-ba98-8595995bc552\") " pod="kube-system/kube-proxy-qw24c"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550576    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwzj\" (UniqueName: \"kubernetes.io/projected/6e3bfb60-eb08-406f-ba98-8595995bc552-kube-api-access-knwzj\") pod \"kube-proxy-qw24c\" (UID: \"6e3bfb60-eb08-406f-ba98-8595995bc552\") " pod="kube-system/kube-proxy-qw24c"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550604    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b45c72b9-d95b-4226-a8f5-e4c45609d742-xtables-lock\") pod \"kindnet-z5h5d\" (UID: \"b45c72b9-d95b-4226-a8f5-e4c45609d742\") " pod="kube-system/kindnet-z5h5d"
	Dec 06 09:03:13 pause-845581 kubelet[1310]: I1206 09:03:13.418668    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qw24c" podStartSLOduration=1.418317829 podStartE2EDuration="1.418317829s" podCreationTimestamp="2025-12-06 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:13.418058574 +0000 UTC m=+6.240989579" watchObservedRunningTime="2025-12-06 09:03:13.418317829 +0000 UTC m=+6.241248832"
	Dec 06 09:03:14 pause-845581 kubelet[1310]: I1206 09:03:14.134597    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-z5h5d" podStartSLOduration=2.134573348 podStartE2EDuration="2.134573348s" podCreationTimestamp="2025-12-06 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:13.447286186 +0000 UTC m=+6.270217192" watchObservedRunningTime="2025-12-06 09:03:14.134573348 +0000 UTC m=+6.957504354"
	Dec 06 09:03:23 pause-845581 kubelet[1310]: I1206 09:03:23.688007    1310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 09:03:23 pause-845581 kubelet[1310]: I1206 09:03:23.736884    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkmpk\" (UniqueName: \"kubernetes.io/projected/b6adb37f-ed13-4c7b-b443-6c918b25c752-kube-api-access-lkmpk\") pod \"coredns-66bc5c9577-txc4m\" (UID: \"b6adb37f-ed13-4c7b-b443-6c918b25c752\") " pod="kube-system/coredns-66bc5c9577-txc4m"
	Dec 06 09:03:23 pause-845581 kubelet[1310]: I1206 09:03:23.736947    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6adb37f-ed13-4c7b-b443-6c918b25c752-config-volume\") pod \"coredns-66bc5c9577-txc4m\" (UID: \"b6adb37f-ed13-4c7b-b443-6c918b25c752\") " pod="kube-system/coredns-66bc5c9577-txc4m"
	Dec 06 09:03:24 pause-845581 kubelet[1310]: I1206 09:03:24.443218    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-txc4m" podStartSLOduration=12.443199396 podStartE2EDuration="12.443199396s" podCreationTimestamp="2025-12-06 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:24.443081523 +0000 UTC m=+17.266012526" watchObservedRunningTime="2025-12-06 09:03:24.443199396 +0000 UTC m=+17.266130402"
	Dec 06 09:03:33 pause-845581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:03:33 pause-845581 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:03:33 pause-845581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:03:33 pause-845581 systemd[1]: kubelet.service: Consumed 1.198s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-845581 -n pause-845581
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-845581 -n pause-845581: exit status 2 (413.080436ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-845581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-845581
helpers_test.go:243: (dbg) docker inspect pause-845581:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca",
	        "Created": "2025-12-06T09:02:51.516470191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196066,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:02:51.565506381Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca/hosts",
	        "LogPath": "/var/lib/docker/containers/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca/3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca-json.log",
	        "Name": "/pause-845581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-845581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-845581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3296925cd4fc4b59beb950a901cdc94b53622f7dba7eff7e5e87fe8d13c5c9ca",
	                "LowerDir": "/var/lib/docker/overlay2/ce1291bdb12a441022c2c4203b69edb0a46f44c73a50ab80980100e45c3cd17e-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce1291bdb12a441022c2c4203b69edb0a46f44c73a50ab80980100e45c3cd17e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce1291bdb12a441022c2c4203b69edb0a46f44c73a50ab80980100e45c3cd17e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce1291bdb12a441022c2c4203b69edb0a46f44c73a50ab80980100e45c3cd17e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-845581",
	                "Source": "/var/lib/docker/volumes/pause-845581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-845581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-845581",
	                "name.minikube.sigs.k8s.io": "pause-845581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5aef11461cacedaa8351eb09a068f4dea864d4e3338e79691743012ada6e6d85",
	            "SandboxKey": "/var/run/docker/netns/5aef11461cac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-845581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5e81e2c30e63a7438fc13729f8bc264fa4f779a1240d65d428ce42a49c15327c",
	                    "EndpointID": "4fb94564c302704ab454a36d856c03f731b48339a25f3372a14929e7844faf2e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "0e:3d:68:c6:bf:3a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-845581",
	                        "3296925cd4fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-845581 -n pause-845581
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-845581 -n pause-845581: exit status 2 (389.53969ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-845581 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-845581 logs -n 25: (1.10502498s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-735357 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                          │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:01 UTC │
	│ stop    │ -p scheduled-stop-735357 --schedule 5m -v=5 --alsologtostderr                                                                                                                                                             │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 5m -v=5 --alsologtostderr                                                                                                                                                             │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 5m -v=5 --alsologtostderr                                                                                                                                                             │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │ 06 Dec 25 09:01 UTC │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │                     │
	│ stop    │ -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:01 UTC │ 06 Dec 25 09:02 UTC │
	│ delete  │ -p scheduled-stop-735357                                                                                                                                                                                                  │ scheduled-stop-735357       │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ start   │ -p insufficient-storage-423078 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-423078 │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │                     │
	│ delete  │ -p insufficient-storage-423078                                                                                                                                                                                            │ insufficient-storage-423078 │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ start   │ -p force-systemd-flag-124894 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-124894   │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:03 UTC │
	│ start   │ -p offline-crio-829666 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-829666         │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │                     │
	│ start   │ -p pause-845581 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-845581                │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:03 UTC │
	│ start   │ -p force-systemd-env-894703 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-894703    │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:03 UTC │
	│ ssh     │ force-systemd-flag-124894 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-124894   │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ delete  │ -p force-systemd-flag-124894                                                                                                                                                                                              │ force-systemd-flag-124894   │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ start   │ -p cert-expiration-006207 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-006207      │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │                     │
	│ delete  │ -p force-systemd-env-894703                                                                                                                                                                                               │ force-systemd-env-894703    │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ start   │ -p cert-options-011599 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-011599         │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │                     │
	│ start   │ -p pause-845581 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-845581                │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ pause   │ -p pause-845581 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-845581                │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:03:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:03:26.360049  209035 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:03:26.360326  209035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:03:26.360337  209035 out.go:374] Setting ErrFile to fd 2...
	I1206 09:03:26.360343  209035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:03:26.360634  209035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:03:26.361165  209035 out.go:368] Setting JSON to false
	I1206 09:03:26.362621  209035 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2757,"bootTime":1765009049,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:03:26.362691  209035 start.go:143] virtualization: kvm guest
	I1206 09:03:26.364720  209035 out.go:179] * [pause-845581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:03:26.366196  209035 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:03:26.366195  209035 notify.go:221] Checking for updates...
	I1206 09:03:26.367687  209035 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:03:26.369084  209035 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:03:26.370300  209035 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:03:26.371625  209035 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:03:26.372834  209035 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:03:26.375289  209035 config.go:182] Loaded profile config "pause-845581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:03:26.375931  209035 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:03:26.401805  209035 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:03:26.401894  209035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:03:26.467828  209035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-06 09:03:26.455458844 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:03:26.467956  209035 docker.go:319] overlay module found
	I1206 09:03:26.470309  209035 out.go:179] * Using the docker driver based on existing profile
	I1206 09:03:26.474168  209035 start.go:309] selected driver: docker
	I1206 09:03:26.474200  209035 start.go:927] validating driver "docker" against &{Name:pause-845581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:03:26.474352  209035 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:03:26.474471  209035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:03:26.543627  209035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-06 09:03:26.533885971 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:03:26.544464  209035 cni.go:84] Creating CNI manager for ""
	I1206 09:03:26.544547  209035 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:03:26.544611  209035 start.go:353] cluster config:
	{Name:pause-845581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:03:26.546938  209035 out.go:179] * Starting "pause-845581" primary control-plane node in "pause-845581" cluster
	I1206 09:03:26.548845  209035 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:03:26.550201  209035 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:03:26.551266  209035 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:03:26.551308  209035 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:03:26.551321  209035 cache.go:65] Caching tarball of preloaded images
	I1206 09:03:26.551370  209035 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:03:26.551408  209035 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:03:26.551418  209035 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:03:26.551597  209035 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/config.json ...
	I1206 09:03:26.575185  209035 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:03:26.575205  209035 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:03:26.575226  209035 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:03:26.575258  209035 start.go:360] acquireMachinesLock for pause-845581: {Name:mk83e33767982839af7cff5ab6a30e1596ccbe89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:03:26.575334  209035 start.go:364] duration metric: took 44.023µs to acquireMachinesLock for "pause-845581"
	I1206 09:03:26.575354  209035 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:03:26.575364  209035 fix.go:54] fixHost starting: 
	I1206 09:03:26.575659  209035 cli_runner.go:164] Run: docker container inspect pause-845581 --format={{.State.Status}}
	I1206 09:03:26.595468  209035 fix.go:112] recreateIfNeeded on pause-845581: state=Running err=<nil>
	W1206 09:03:26.595509  209035 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:03:25.560083  205517 cli_runner.go:164] Run: docker network inspect cert-options-011599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:03:25.577977  205517 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:03:25.582100  205517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
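
The /bin/bash one-liner above makes the host.minikube.internal mapping idempotent: it filters out any existing entry for that name, appends the current gateway address, and copies the result back over /etc/hosts. A native Go equivalent might look roughly like this (a sketch under that assumption, not minikube's implementation, which shells out exactly as logged):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // updateHostsEntry rewrites hostsPath so that exactly one line maps ip to name,
    // mirroring the "grep -v ... ; echo ... ; sudo cp ..." pattern shown in the log.
    func updateHostsEntry(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any previous mapping for the managed name (tab-separated, as grep expects).
    		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// Example invocation mirroring the logged command (the real file needs root).
    	if err := updateHostsEntry("/tmp/hosts.example", "192.168.94.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
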
	I1206 09:03:25.592383  205517 kubeadm.go:884] updating cluster {Name:cert-options-011599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-011599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:03:25.592531  205517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:03:25.592584  205517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:03:25.624926  205517 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:03:25.624939  205517 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:03:25.625000  205517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:03:25.650376  205517 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:03:25.650394  205517 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:03:25.650401  205517 kubeadm.go:935] updating node { 192.168.94.2 8555 v1.34.2 crio true true} ...
	I1206 09:03:25.650494  205517 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-options-011599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-options-011599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:03:25.650570  205517 ssh_runner.go:195] Run: crio config
	I1206 09:03:25.696010  205517 cni.go:84] Creating CNI manager for ""
	I1206 09:03:25.696028  205517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:03:25.696045  205517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:03:25.696151  205517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8555 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-011599 NodeName:cert-options-011599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:03:25.696350  205517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-011599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:03:25.696439  205517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:03:25.704763  205517 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:03:25.704837  205517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:03:25.713520  205517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1206 09:03:25.727585  205517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:03:25.744306  205517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1206 09:03:25.757132  205517 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:03:25.760832  205517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:03:25.771325  205517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:25.853953  205517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:03:25.875410  205517 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599 for IP: 192.168.94.2
	I1206 09:03:25.875430  205517 certs.go:195] generating shared ca certs ...
	I1206 09:03:25.875447  205517 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:25.875598  205517 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:03:25.875631  205517 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:03:25.875637  205517 certs.go:257] generating profile certs ...
	I1206 09:03:25.875692  205517 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.key
	I1206 09:03:25.875700  205517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.crt with IP's: []
	I1206 09:03:25.959859  205517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.crt ...
	I1206 09:03:25.959874  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.crt: {Name:mk26f380aa9c4f22a560e9774cdcf1301eb7748c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:25.960070  205517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.key ...
	I1206 09:03:25.960081  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/client.key: {Name:mkdf90b5ef0cd13798706908b50265546f883bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:25.960175  205517 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key.f30fe77a
	I1206 09:03:25.960185  205517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt.f30fe77a with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:03:26.024250  205517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt.f30fe77a ...
	I1206 09:03:26.024265  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt.f30fe77a: {Name:mkdc27904be0d380fe8cb1eb4c6d6ffdceb9adda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:26.024424  205517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key.f30fe77a ...
	I1206 09:03:26.024432  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key.f30fe77a: {Name:mk7da19ae9c476dca53b275364ef8aa1f49ddad3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:26.024504  205517 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt.f30fe77a -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt
	I1206 09:03:26.024570  205517 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key.f30fe77a -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key
	I1206 09:03:26.024616  205517 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.key
	I1206 09:03:26.024628  205517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.crt with IP's: []
	I1206 09:03:26.051882  205517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.crt ...
	I1206 09:03:26.051897  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.crt: {Name:mkf564043675ccee8ed8d5914cc4a0da3d23cf19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:26.052071  205517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.key ...
	I1206 09:03:26.052084  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.key: {Name:mk53609c91a7058ef7c0bff8e83ac9fa02542e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:26.052306  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:03:26.052340  205517 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:03:26.052346  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:03:26.052370  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:03:26.052397  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:03:26.052435  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:03:26.052474  205517 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:03:26.053023  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:03:26.071904  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:03:26.089458  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:03:26.106242  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:03:26.123716  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I1206 09:03:26.141166  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:03:26.158543  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:03:26.176552  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/cert-options-011599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:03:26.194351  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:03:26.213953  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:03:26.233119  205517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:03:26.252366  205517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:03:26.266812  205517 ssh_runner.go:195] Run: openssl version
	I1206 09:03:26.273549  205517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:26.281782  205517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:03:26.291153  205517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:26.298764  205517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:26.298831  205517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:26.342053  205517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:03:26.350397  205517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:03:26.358958  205517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:03:26.367012  205517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:03:26.375134  205517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:03:26.379484  205517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:03:26.379535  205517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:03:26.424289  205517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:03:26.434963  205517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:03:26.444254  205517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:03:26.452680  205517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:03:26.462485  205517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:03:26.467672  205517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:03:26.467726  205517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:03:26.518096  205517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:03:26.526463  205517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
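
The ln -fs commands above follow OpenSSL's hashed-directory convention: a CA certificate becomes trusted system-wide once it is linked under /etc/ssl/certs as <subject-hash>.0, where the hash is the output of openssl x509 -hash -noout. A minimal sketch of that convention in Go (shelling out to openssl, as the log does; the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert symlinks certPath into certsDir under its OpenSSL subject hash,
    // mirroring the "openssl x509 -hash -noout" + "ln -fs" steps in the log.
    func installCACert(certPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // emulate ln -f: replace an existing link if present
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("trusted via", link) // e.g. /etc/ssl/certs/b5213941.0, as seen above
    }
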
	I1206 09:03:26.534823  205517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:03:26.539088  205517 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:03:26.539147  205517 kubeadm.go:401] StartCluster: {Name:cert-options-011599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-011599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:03:26.539229  205517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:03:26.539291  205517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:03:26.570754  205517 cri.go:89] found id: ""
	I1206 09:03:26.570826  205517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:03:26.579929  205517 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:03:26.588682  205517 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:03:26.588723  205517 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:03:26.597182  205517 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:03:26.597191  205517 kubeadm.go:158] found existing configuration files:
	
	I1206 09:03:26.597230  205517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I1206 09:03:26.605210  205517 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:03:26.605261  205517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:03:26.613642  205517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I1206 09:03:26.621661  205517 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:03:26.621715  205517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:03:26.629405  205517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I1206 09:03:26.637402  205517 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:03:26.637445  205517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:03:26.645248  205517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I1206 09:03:26.652718  205517 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:03:26.652757  205517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:03:26.660865  205517 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:03:26.699894  205517 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:03:26.699938  205517 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:03:26.721271  205517 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:03:26.721321  205517 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:03:26.721347  205517 kubeadm.go:319] OS: Linux
	I1206 09:03:26.721381  205517 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:03:26.721419  205517 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:03:26.721455  205517 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:03:26.721491  205517 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:03:26.721526  205517 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:03:26.721566  205517 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:03:26.721604  205517 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:03:26.721636  205517 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:03:26.786939  205517 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:03:26.787135  205517 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:03:26.787282  205517 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:03:26.795542  205517 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:03:26.797494  205517 out.go:252]   - Generating certificates and keys ...
	I1206 09:03:26.797575  205517 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:03:26.797644  205517 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:03:27.106501  205517 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:03:27.490407  205517 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:03:25.184788  203960 out.go:252]   - Generating certificates and keys ...
	I1206 09:03:25.184896  203960 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:03:25.185009  203960 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:03:25.340784  203960 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:03:25.514196  203960 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:03:25.818314  203960 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:03:26.230091  203960 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:03:26.439733  203960 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:03:26.439960  203960 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-006207 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:03:26.773805  203960 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:03:26.774294  203960 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-006207 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:03:27.076500  203960 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:03:27.116428  203960 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:03:27.479530  203960 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:03:27.479758  203960 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:03:27.631774  203960 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:03:26.597665  209035 out.go:252] * Updating the running docker "pause-845581" container ...
	I1206 09:03:26.597706  209035 machine.go:94] provisionDockerMachine start ...
	I1206 09:03:26.597760  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:26.616277  209035 main.go:143] libmachine: Using SSH client type: native
	I1206 09:03:26.616530  209035 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1206 09:03:26.616551  209035 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:03:26.747780  209035 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-845581
	
	I1206 09:03:26.747808  209035 ubuntu.go:182] provisioning hostname "pause-845581"
	I1206 09:03:26.747884  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:26.770732  209035 main.go:143] libmachine: Using SSH client type: native
	I1206 09:03:26.771077  209035 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1206 09:03:26.771096  209035 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-845581 && echo "pause-845581" | sudo tee /etc/hostname
	I1206 09:03:26.911861  209035 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-845581
	
	I1206 09:03:26.911973  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:26.933108  209035 main.go:143] libmachine: Using SSH client type: native
	I1206 09:03:26.933314  209035 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1206 09:03:26.933333  209035 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-845581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-845581/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-845581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:03:27.062732  209035 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:03:27.062761  209035 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:03:27.062804  209035 ubuntu.go:190] setting up certificates
	I1206 09:03:27.062815  209035 provision.go:84] configureAuth start
	I1206 09:03:27.062879  209035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-845581
	I1206 09:03:27.084783  209035 provision.go:143] copyHostCerts
	I1206 09:03:27.084864  209035 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:03:27.084878  209035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:03:27.084956  209035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:03:27.085089  209035 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:03:27.085101  209035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:03:27.085137  209035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:03:27.085237  209035 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:03:27.085246  209035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:03:27.085278  209035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:03:27.085357  209035 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.pause-845581 san=[127.0.0.1 192.168.103.2 localhost minikube pause-845581]
	I1206 09:03:27.306973  209035 provision.go:177] copyRemoteCerts
	I1206 09:03:27.307040  209035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:03:27.307071  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:27.326172  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:27.420389  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:03:27.438669  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:03:27.458208  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:03:27.478826  209035 provision.go:87] duration metric: took 415.993754ms to configureAuth
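
The provision.go:117 line above shows the server certificate being generated with org=jenkins.pause-845581 and SANs [127.0.0.1 192.168.103.2 localhost minikube pause-845581]. As a rough sketch of what issuing a certificate with those Subject Alternative Names involves (self-signed here for brevity; the real server.pem is signed by the machine CA, and nothing below is taken from minikube's certificate code):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs copied from the provision.go log line above.
    	dnsNames := []string{"localhost", "minikube", "pause-845581"}
    	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")}

    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.pause-845581"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames,
    		IPAddresses:  ips,
    	}
    	// Self-signed for the sketch; a real server.pem would be signed by ca.pem/ca-key.pem instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
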
	I1206 09:03:27.478862  209035 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:03:27.479140  209035 config.go:182] Loaded profile config "pause-845581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:03:27.479271  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:27.502696  209035 main.go:143] libmachine: Using SSH client type: native
	I1206 09:03:27.502949  209035 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1206 09:03:27.502969  209035 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:03:27.839305  209035 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:03:27.839334  209035 machine.go:97] duration metric: took 1.2416207s to provisionDockerMachine
	I1206 09:03:27.839347  209035 start.go:293] postStartSetup for "pause-845581" (driver="docker")
	I1206 09:03:27.839359  209035 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:03:27.839436  209035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:03:27.839498  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:27.859095  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:27.954264  209035 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:03:27.958673  209035 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:03:27.958699  209035 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:03:27.958711  209035 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:03:27.958779  209035 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:03:27.958874  209035 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:03:27.959019  209035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:03:27.967824  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:03:27.989736  209035 start.go:296] duration metric: took 150.37265ms for postStartSetup
	I1206 09:03:27.989817  209035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:03:27.989865  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:28.010351  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:28.103954  209035 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:03:28.109415  209035 fix.go:56] duration metric: took 1.534045036s for fixHost
	I1206 09:03:28.109446  209035 start.go:83] releasing machines lock for "pause-845581", held for 1.534100146s
	I1206 09:03:28.109522  209035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-845581
	I1206 09:03:28.129326  209035 ssh_runner.go:195] Run: cat /version.json
	I1206 09:03:28.129387  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:28.129432  209035 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:03:28.129524  209035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845581
	I1206 09:03:28.150862  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:28.151574  209035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/pause-845581/id_rsa Username:docker}
	I1206 09:03:28.301937  209035 ssh_runner.go:195] Run: systemctl --version
	I1206 09:03:28.308735  209035 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:03:28.345933  209035 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:03:28.350982  209035 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:03:28.351071  209035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:03:28.359653  209035 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:03:28.359680  209035 start.go:496] detecting cgroup driver to use...
	I1206 09:03:28.359709  209035 detect.go:190] detected "systemd" cgroup driver on host os
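
detect.go reports a systemd cgroup driver for this host. The log does not show how that decision is made; one common heuristic, sketched below purely for illustration, is to treat a cgroup v2 host (which exposes /sys/fs/cgroup/cgroup.controllers) as systemd-managed and fall back to cgroupfs otherwise.

    package main

    import (
    	"fmt"
    	"os"
    )

    // detectCgroupDriver is an illustrative heuristic, not minikube's detect.go logic:
    // cgroup v2 exposes /sys/fs/cgroup/cgroup.controllers, and on such hosts the
    // systemd driver is the usual choice; otherwise default to cgroupfs.
    func detectCgroupDriver() string {
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() {
    	fmt.Println("detected cgroup driver:", detectCgroupDriver())
    }
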
	I1206 09:03:28.359743  209035 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:03:28.374642  209035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:03:28.387742  209035 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:03:28.387789  209035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:03:28.403749  209035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:03:28.417728  209035 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:03:28.557242  209035 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:03:28.700910  209035 docker.go:234] disabling docker service ...
	I1206 09:03:28.701003  209035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:03:28.717237  209035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:03:28.733697  209035 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:03:28.858424  209035 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:03:28.979705  209035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:03:28.995098  209035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:03:29.010956  209035 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:03:29.011032  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.020271  209035 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:03:29.020330  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.029397  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.038565  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.047430  209035 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:03:29.055736  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.065317  209035 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.074184  209035 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:03:29.083616  209035 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:03:29.091078  209035 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:03:29.098592  209035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:29.210329  209035 ssh_runner.go:195] Run: sudo systemctl restart crio
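
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before the restart: they pin pause_image to registry.k8s.io/pause:3.10.1, force cgroup_manager = "systemd" with conmon_cgroup = "pod", and re-add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A Go sketch of the first two edits, assuming (as the sed expressions do) that the keys are already present in the drop-in:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioDropIn mirrors the sed edits logged above: pin the pause image and
    // switch the cgroup manager, assuming both keys already exist in the file.
    func rewriteCrioDropIn(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
    	return os.WriteFile(path, data, 0644)
    }

    func main() {
    	err := rewriteCrioDropIn("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.10.1", "systemd")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	// A daemon-reload plus "systemctl restart crio" is still needed afterwards, as in the log.
    }
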
	I1206 09:03:29.413711  209035 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:03:29.413776  209035 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:03:29.418103  209035 start.go:564] Will wait 60s for crictl version
	I1206 09:03:29.418161  209035 ssh_runner.go:195] Run: which crictl
	I1206 09:03:29.421890  209035 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:03:29.447793  209035 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:03:29.447873  209035 ssh_runner.go:195] Run: crio --version
	I1206 09:03:29.478884  209035 ssh_runner.go:195] Run: crio --version
	I1206 09:03:29.514368  209035 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:03:27.999839  205517 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:03:28.088028  205517 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:03:28.245547  205517 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:03:28.245808  205517 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-options-011599 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:03:28.685093  205517 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:03:28.685248  205517 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-options-011599 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:03:28.834950  205517 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:03:29.085664  205517 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:03:29.163957  205517 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:03:29.164072  205517 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:03:29.282275  205517 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:03:29.327978  205517 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:03:29.399666  205517 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:03:29.521565  205517 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:03:29.819713  205517 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:03:29.820221  205517 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:03:29.824038  205517 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:03:28.920245  203960 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:03:29.173304  203960 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:03:29.713682  203960 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:03:29.991146  203960 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:03:29.991915  203960 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:03:29.996286  203960 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1206 09:03:25.283092  194065 node_ready.go:57] node "offline-crio-829666" has "Ready":"False" status (will retry)
	W1206 09:03:27.782657  194065 node_ready.go:57] node "offline-crio-829666" has "Ready":"False" status (will retry)
	I1206 09:03:29.515651  209035 cli_runner.go:164] Run: docker network inspect pause-845581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:03:29.535659  209035 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1206 09:03:29.539938  209035 kubeadm.go:884] updating cluster {Name:pause-845581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:03:29.540110  209035 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:03:29.540166  209035 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:03:29.575775  209035 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:03:29.575794  209035 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:03:29.575838  209035 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:03:29.602053  209035 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:03:29.602076  209035 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:03:29.602084  209035 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1206 09:03:29.602182  209035 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-845581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:03:29.602254  209035 ssh_runner.go:195] Run: crio config
	I1206 09:03:29.654124  209035 cni.go:84] Creating CNI manager for ""
	I1206 09:03:29.654146  209035 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:03:29.654162  209035 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:03:29.654188  209035 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-845581 NodeName:pause-845581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:03:29.654356  209035 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-845581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:03:29.654427  209035 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:03:29.662880  209035 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:03:29.662935  209035 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:03:29.671252  209035 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 09:03:29.684189  209035 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:03:29.697894  209035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
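	(Editorial note: the kubeadm.yaml written above is a multi-document YAML file — InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, as shown in the dump. As a rough illustration only, not minikube's actual code, the following Go sketch splits such a file and reports each document's kind; the gopkg.in/yaml.v3 dependency and the trimmed stand-in content are assumptions.)

	package main

	import (
		"fmt"
		"strings"

		"gopkg.in/yaml.v3" // assumed dependency for this sketch
	)

	// A trimmed stand-in for the multi-document config shown above.
	const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	`

	func main() {
		// Each "---" separated document is unmarshalled just far enough
		// to read its apiVersion and kind.
		for _, doc := range strings.Split(kubeadmYAML, "---") {
			var meta struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				fmt.Println("parse error:", err)
				continue
			}
			fmt.Printf("%s / %s\n", meta.APIVersion, meta.Kind)
		}
	}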
	I1206 09:03:29.712889  209035 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:03:29.717720  209035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:29.847064  209035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:03:29.861945  209035 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581 for IP: 192.168.103.2
	I1206 09:03:29.861969  209035 certs.go:195] generating shared ca certs ...
	I1206 09:03:29.862016  209035 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:29.862161  209035 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:03:29.862222  209035 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:03:29.862238  209035 certs.go:257] generating profile certs ...
	I1206 09:03:29.862348  209035 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.key
	I1206 09:03:29.862445  209035 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/apiserver.key.133c68b5
	I1206 09:03:29.862504  209035 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/proxy-client.key
	I1206 09:03:29.862630  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:03:29.862677  209035 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:03:29.862692  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:03:29.862732  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:03:29.862768  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:03:29.862803  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:03:29.862860  209035 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:03:29.863509  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:03:29.888298  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:03:29.906357  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:03:29.924155  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:03:29.941804  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:03:29.960813  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:03:29.979325  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:03:30.000762  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:03:30.019277  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:03:30.037718  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:03:30.061205  209035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:03:30.080390  209035 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:03:30.093403  209035 ssh_runner.go:195] Run: openssl version
	I1206 09:03:30.099416  209035 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:03:30.107030  209035 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:03:30.114724  209035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:03:30.118467  209035 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:03:30.118516  209035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:03:30.161802  209035 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:03:30.170577  209035 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:30.179753  209035 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:03:30.188179  209035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:30.193743  209035 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:30.193806  209035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:03:30.229368  209035 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:03:30.237532  209035 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:03:30.245243  209035 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:03:30.253773  209035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:03:30.257934  209035 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:03:30.258017  209035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:03:30.293733  209035 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:03:30.301734  209035 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:03:30.305646  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:03:30.340932  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:03:30.376546  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:03:30.416447  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:03:30.464776  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:03:30.500266  209035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
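	(Editorial note: the `openssl x509 -noout -checkend 86400` runs above ask whether each certificate expires within the next 24 hours, i.e. 86400 seconds. A minimal Go sketch of the same check, using only the standard library and a hypothetical certificate path, could look like the following.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within the given duration, mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Hypothetical path; the log above checks several certs under /var/lib/minikube/certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}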
	I1206 09:03:30.534941  209035 kubeadm.go:401] StartCluster: {Name:pause-845581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-845581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:03:30.535113  209035 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:03:30.535169  209035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:03:30.565689  209035 cri.go:89] found id: "aaca6c4c6a8d78096304ba23ae8765340339c98d5b9dbb278c3b356cb82203f8"
	I1206 09:03:30.565713  209035 cri.go:89] found id: "eec9250853c2ca57aebb1d21db7963b81c996c5b22fb94e27561aaf119dad7d0"
	I1206 09:03:30.565718  209035 cri.go:89] found id: "1cd69e4c42603f6f424b431e36bb72a52f823339198846f95c3b3c5480f81252"
	I1206 09:03:30.565724  209035 cri.go:89] found id: "f2ceef154e22acc6d2c2e75a1eba5b6237a4b72eaa0cc6d3cfb7e2403be267aa"
	I1206 09:03:30.565728  209035 cri.go:89] found id: "83bc849744de03965195574dcc91d751d16f46d6a16b015504af6a1c68b187ce"
	I1206 09:03:30.565732  209035 cri.go:89] found id: "3d999efa25cdcc902bdcb270fd87b1c9cc14154168b76a167a9a70ad5a7c81e3"
	I1206 09:03:30.565737  209035 cri.go:89] found id: "0d73f16cec903605672b7f5eba71cbb655f0cdf983fd6f72c5726ef836f26233"
	I1206 09:03:30.565742  209035 cri.go:89] found id: ""
	I1206 09:03:30.565785  209035 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:03:30.577648  209035 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:03:30Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:03:30.577704  209035 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:03:30.585619  209035 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:03:30.585640  209035 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:03:30.585685  209035 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:03:30.593576  209035 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:03:30.594427  209035 kubeconfig.go:125] found "pause-845581" server: "https://192.168.103.2:8443"
	I1206 09:03:30.595545  209035 kapi.go:59] client config for pause-845581: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.key", CAFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:03:30.596081  209035 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 09:03:30.596101  209035 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 09:03:30.596111  209035 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 09:03:30.596117  209035 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 09:03:30.596124  209035 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 09:03:30.596524  209035 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:03:30.604849  209035 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1206 09:03:30.604878  209035 kubeadm.go:602] duration metric: took 19.23166ms to restartPrimaryControlPlane
	I1206 09:03:30.604887  209035 kubeadm.go:403] duration metric: took 69.958239ms to StartCluster
	I1206 09:03:30.604904  209035 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:30.604968  209035 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:03:30.606133  209035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:30.606425  209035 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:03:30.606542  209035 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:03:30.606632  209035 config.go:182] Loaded profile config "pause-845581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:03:30.608211  209035 out.go:179] * Enabled addons: 
	I1206 09:03:30.608226  209035 out.go:179] * Verifying Kubernetes components...
	I1206 09:03:30.609470  209035 addons.go:530] duration metric: took 2.934726ms for enable addons: enabled=[]
	I1206 09:03:30.609498  209035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:30.716604  209035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:03:30.730000  209035 node_ready.go:35] waiting up to 6m0s for node "pause-845581" to be "Ready" ...
	I1206 09:03:30.737520  209035 node_ready.go:49] node "pause-845581" is "Ready"
	I1206 09:03:30.737544  209035 node_ready.go:38] duration metric: took 7.500786ms for node "pause-845581" to be "Ready" ...
	I1206 09:03:30.737557  209035 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:03:30.737600  209035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:03:30.749715  209035 api_server.go:72] duration metric: took 143.255552ms to wait for apiserver process to appear ...
	I1206 09:03:30.749738  209035 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:03:30.749755  209035 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:03:30.754487  209035 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1206 09:03:30.755489  209035 api_server.go:141] control plane version: v1.34.2
	I1206 09:03:30.755518  209035 api_server.go:131] duration metric: took 5.773967ms to wait for apiserver health ...
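	(Editorial note: the healthz wait above is a plain HTTPS GET against the apiserver endpoint. A hedged Go sketch of an equivalent probe follows; TLS verification is skipped only for brevity here, whereas a real client should trust the cluster CA at .minikube/ca.crt.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip verification instead of loading the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}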
	I1206 09:03:30.755529  209035 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:03:30.758958  209035 system_pods.go:59] 7 kube-system pods found
	I1206 09:03:30.759028  209035 system_pods.go:61] "coredns-66bc5c9577-txc4m" [b6adb37f-ed13-4c7b-b443-6c918b25c752] Running
	I1206 09:03:30.759043  209035 system_pods.go:61] "etcd-pause-845581" [08ae8880-2ddc-47b6-ab8b-d9f523cdaef6] Running
	I1206 09:03:30.759048  209035 system_pods.go:61] "kindnet-z5h5d" [b45c72b9-d95b-4226-a8f5-e4c45609d742] Running
	I1206 09:03:30.759055  209035 system_pods.go:61] "kube-apiserver-pause-845581" [23ac1868-3d81-4ab2-81a6-9b4656fa7798] Running
	I1206 09:03:30.759059  209035 system_pods.go:61] "kube-controller-manager-pause-845581" [b627fac7-11a5-4470-ae22-f256286ec572] Running
	I1206 09:03:30.759063  209035 system_pods.go:61] "kube-proxy-qw24c" [6e3bfb60-eb08-406f-ba98-8595995bc552] Running
	I1206 09:03:30.759067  209035 system_pods.go:61] "kube-scheduler-pause-845581" [5002f38a-f3e3-4b76-9b6e-3f59303b96b4] Running
	I1206 09:03:30.759076  209035 system_pods.go:74] duration metric: took 3.540471ms to wait for pod list to return data ...
	I1206 09:03:30.759085  209035 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:03:30.761085  209035 default_sa.go:45] found service account: "default"
	I1206 09:03:30.761105  209035 default_sa.go:55] duration metric: took 2.011465ms for default service account to be created ...
	I1206 09:03:30.761115  209035 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:03:30.763357  209035 system_pods.go:86] 7 kube-system pods found
	I1206 09:03:30.763379  209035 system_pods.go:89] "coredns-66bc5c9577-txc4m" [b6adb37f-ed13-4c7b-b443-6c918b25c752] Running
	I1206 09:03:30.763391  209035 system_pods.go:89] "etcd-pause-845581" [08ae8880-2ddc-47b6-ab8b-d9f523cdaef6] Running
	I1206 09:03:30.763399  209035 system_pods.go:89] "kindnet-z5h5d" [b45c72b9-d95b-4226-a8f5-e4c45609d742] Running
	I1206 09:03:30.763403  209035 system_pods.go:89] "kube-apiserver-pause-845581" [23ac1868-3d81-4ab2-81a6-9b4656fa7798] Running
	I1206 09:03:30.763407  209035 system_pods.go:89] "kube-controller-manager-pause-845581" [b627fac7-11a5-4470-ae22-f256286ec572] Running
	I1206 09:03:30.763428  209035 system_pods.go:89] "kube-proxy-qw24c" [6e3bfb60-eb08-406f-ba98-8595995bc552] Running
	I1206 09:03:30.763432  209035 system_pods.go:89] "kube-scheduler-pause-845581" [5002f38a-f3e3-4b76-9b6e-3f59303b96b4] Running
	I1206 09:03:30.763437  209035 system_pods.go:126] duration metric: took 2.317227ms to wait for k8s-apps to be running ...
	I1206 09:03:30.763443  209035 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:03:30.763481  209035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:03:30.780926  209035 system_svc.go:56] duration metric: took 17.473605ms WaitForService to wait for kubelet
	I1206 09:03:30.780959  209035 kubeadm.go:587] duration metric: took 174.502449ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:03:30.780979  209035 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:03:30.786736  209035 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:03:30.786769  209035 node_conditions.go:123] node cpu capacity is 8
	I1206 09:03:30.786786  209035 node_conditions.go:105] duration metric: took 5.8ms to run NodePressure ...
	I1206 09:03:30.786804  209035 start.go:242] waiting for startup goroutines ...
	I1206 09:03:30.786821  209035 start.go:247] waiting for cluster config update ...
	I1206 09:03:30.786836  209035 start.go:256] writing updated cluster config ...
	I1206 09:03:30.787232  209035 ssh_runner.go:195] Run: rm -f paused
	I1206 09:03:30.791430  209035 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:03:30.792227  209035 kapi.go:59] client config for pause-845581: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/pause-845581/client.key", CAFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:03:30.795205  209035 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-txc4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.799615  209035 pod_ready.go:94] pod "coredns-66bc5c9577-txc4m" is "Ready"
	I1206 09:03:30.799635  209035 pod_ready.go:86] duration metric: took 4.411305ms for pod "coredns-66bc5c9577-txc4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.801542  209035 pod_ready.go:83] waiting for pod "etcd-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.805310  209035 pod_ready.go:94] pod "etcd-pause-845581" is "Ready"
	I1206 09:03:30.805328  209035 pod_ready.go:86] duration metric: took 3.768169ms for pod "etcd-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.807181  209035 pod_ready.go:83] waiting for pod "kube-apiserver-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.810336  209035 pod_ready.go:94] pod "kube-apiserver-pause-845581" is "Ready"
	I1206 09:03:30.810356  209035 pod_ready.go:86] duration metric: took 3.157807ms for pod "kube-apiserver-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:30.811941  209035 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:31.195296  209035 pod_ready.go:94] pod "kube-controller-manager-pause-845581" is "Ready"
	I1206 09:03:31.195324  209035 pod_ready.go:86] duration metric: took 383.363695ms for pod "kube-controller-manager-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:31.397093  209035 pod_ready.go:83] waiting for pod "kube-proxy-qw24c" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:31.796541  209035 pod_ready.go:94] pod "kube-proxy-qw24c" is "Ready"
	I1206 09:03:31.796571  209035 pod_ready.go:86] duration metric: took 399.446593ms for pod "kube-proxy-qw24c" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:31.996763  209035 pod_ready.go:83] waiting for pod "kube-scheduler-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:32.397751  209035 pod_ready.go:94] pod "kube-scheduler-pause-845581" is "Ready"
	I1206 09:03:32.397778  209035 pod_ready.go:86] duration metric: took 400.989902ms for pod "kube-scheduler-pause-845581" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:03:32.397792  209035 pod_ready.go:40] duration metric: took 1.606323876s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:03:32.465585  209035 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:03:32.467622  209035 out.go:179] * Done! kubectl is now configured to use "pause-845581" cluster and "default" namespace by default
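	(Editorial note: the pod_ready waits above check the Ready condition of each kube-system pod. A rough client-go sketch of that check follows; it assumes the k8s.io/client-go dependency and a hypothetical kubeconfig path, and is not minikube's actual implementation.)

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the test harness uses its own profile kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, pod := range pods.Items {
			ready := false
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			fmt.Printf("%s ready=%v\n", pod.Name, ready)
		}
	}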
	I1206 09:03:29.825908  205517 out.go:252]   - Booting up control plane ...
	I1206 09:03:29.825981  205517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:03:29.826075  205517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:03:29.826675  205517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:03:29.840479  205517 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:03:29.840602  205517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:03:29.849015  205517 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:03:29.849313  205517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:03:29.849359  205517 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:03:29.951699  205517 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:03:29.951860  205517 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:03:30.453368  205517 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.738543ms
	I1206 09:03:30.456433  205517 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:03:30.456601  205517 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8555/livez
	I1206 09:03:30.456740  205517 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:03:30.456859  205517 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:03:29.999120  203960 out.go:252]   - Booting up control plane ...
	I1206 09:03:29.999229  203960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:03:29.999315  203960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:03:29.999422  203960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:03:30.013576  203960 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:03:30.013768  203960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:03:30.021470  203960 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:03:30.021772  203960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:03:30.021833  203960 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:03:30.154046  203960 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:03:30.154218  203960 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:03:31.155164  203960 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001282364s
	I1206 09:03:31.160185  203960 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:03:31.160309  203960 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:03:31.160451  203960 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:03:31.160563  203960 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:03:33.137137  205517 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.680601887s
	I1206 09:03:34.131973  205517 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.675415181s
	I1206 09:03:34.958120  205517 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.50155539s
	I1206 09:03:34.977913  205517 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:03:34.988981  205517 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:03:35.001941  205517 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:03:35.002310  205517 kubeadm.go:319] [mark-control-plane] Marking the node cert-options-011599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:03:35.011973  205517 kubeadm.go:319] [bootstrap-token] Using token: hes1x1.cqcy7hlgwt1ngsm8
	W1206 09:03:30.282744  194065 node_ready.go:57] node "offline-crio-829666" has "Ready":"False" status (will retry)
	W1206 09:03:32.783378  194065 node_ready.go:57] node "offline-crio-829666" has "Ready":"False" status (will retry)
	I1206 09:03:32.956173  203960 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.795850679s
	I1206 09:03:33.698450  203960 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.538287392s
	I1206 09:03:35.664906  203960 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.504571499s
	I1206 09:03:35.684027  203960 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:03:35.695623  203960 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:03:35.706609  203960 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:03:35.706914  203960 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-006207 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:03:35.717187  203960 kubeadm.go:319] [bootstrap-token] Using token: zdyova.id18aq0el1armeav
	I1206 09:03:35.013767  205517 out.go:252]   - Configuring RBAC rules ...
	I1206 09:03:35.013887  205517 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:03:35.017718  205517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:03:35.035751  205517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:03:35.040483  205517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:03:35.043662  205517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:03:35.047085  205517 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:03:35.364566  205517 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:03:35.782009  205517 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:03:36.364643  205517 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:03:36.365738  205517 kubeadm.go:319] 
	I1206 09:03:36.365798  205517 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:03:36.365801  205517 kubeadm.go:319] 
	I1206 09:03:36.365869  205517 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:03:36.365872  205517 kubeadm.go:319] 
	I1206 09:03:36.365892  205517 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:03:36.365947  205517 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:03:36.366026  205517 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:03:36.366029  205517 kubeadm.go:319] 
	I1206 09:03:36.366090  205517 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:03:36.366094  205517 kubeadm.go:319] 
	I1206 09:03:36.366160  205517 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:03:36.366168  205517 kubeadm.go:319] 
	I1206 09:03:36.366209  205517 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:03:36.366272  205517 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:03:36.366370  205517 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:03:36.366380  205517 kubeadm.go:319] 
	I1206 09:03:36.366509  205517 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:03:36.366629  205517 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:03:36.366634  205517 kubeadm.go:319] 
	I1206 09:03:36.366744  205517 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8555 --token hes1x1.cqcy7hlgwt1ngsm8 \
	I1206 09:03:36.366888  205517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:03:36.366907  205517 kubeadm.go:319] 	--control-plane 
	I1206 09:03:36.366909  205517 kubeadm.go:319] 
	I1206 09:03:36.367011  205517 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:03:36.367017  205517 kubeadm.go:319] 
	I1206 09:03:36.367096  205517 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8555 --token hes1x1.cqcy7hlgwt1ngsm8 \
	I1206 09:03:36.367215  205517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:03:36.370448  205517 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:03:36.370609  205517 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:03:36.370641  205517 cni.go:84] Creating CNI manager for ""
	I1206 09:03:36.370653  205517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:03:36.372258  205517 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:03:36.373437  205517 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:03:36.378235  205517 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:03:36.378245  205517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:03:36.393783  205517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:03:36.643814  205517 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:03:36.643873  205517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:03:36.643880  205517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-options-011599 minikube.k8s.io/updated_at=2025_12_06T09_03_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=cert-options-011599 minikube.k8s.io/primary=true
	I1206 09:03:36.657396  205517 ops.go:34] apiserver oom_adj: -16
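	(Editorial note: the oom_adj probe above is simply `cat /proc/$(pgrep kube-apiserver)/oom_adj` run over SSH; -16 means the apiserver is strongly protected from the OOM killer. A small, purely illustrative Go sketch of the same lookup could be:)

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the newest kube-apiserver process, then read its oom_adj from /proc.
		out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
		if err != nil {
			log.Fatal("kube-apiserver not running: ", err)
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}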
	I1206 09:03:36.731136  205517 kubeadm.go:1114] duration metric: took 87.332776ms to wait for elevateKubeSystemPrivileges
	I1206 09:03:36.743309  205517 kubeadm.go:403] duration metric: took 10.204165594s to StartCluster
	I1206 09:03:36.743335  205517 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:36.743422  205517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:03:36.744921  205517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:36.745188  205517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:03:36.745184  205517 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:03:36.745289  205517 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:03:36.745377  205517 addons.go:70] Setting storage-provisioner=true in profile "cert-options-011599"
	I1206 09:03:36.745403  205517 config.go:182] Loaded profile config "cert-options-011599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:03:36.745406  205517 addons.go:239] Setting addon storage-provisioner=true in "cert-options-011599"
	I1206 09:03:36.745437  205517 host.go:66] Checking if "cert-options-011599" exists ...
	I1206 09:03:36.745443  205517 addons.go:70] Setting default-storageclass=true in profile "cert-options-011599"
	I1206 09:03:36.745456  205517 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-options-011599"
	I1206 09:03:36.745761  205517 cli_runner.go:164] Run: docker container inspect cert-options-011599 --format={{.State.Status}}
	I1206 09:03:36.745917  205517 cli_runner.go:164] Run: docker container inspect cert-options-011599 --format={{.State.Status}}
	I1206 09:03:36.746844  205517 out.go:179] * Verifying Kubernetes components...
	I1206 09:03:36.751531  205517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:36.769329  205517 addons.go:239] Setting addon default-storageclass=true in "cert-options-011599"
	I1206 09:03:36.769372  205517 host.go:66] Checking if "cert-options-011599" exists ...
	I1206 09:03:36.769843  205517 cli_runner.go:164] Run: docker container inspect cert-options-011599 --format={{.State.Status}}
	I1206 09:03:36.770472  205517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:03:35.718652  203960 out.go:252]   - Configuring RBAC rules ...
	I1206 09:03:35.718817  203960 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:03:35.722704  203960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:03:35.728877  203960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:03:35.733327  203960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:03:35.737059  203960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:03:35.743182  203960 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:03:36.071449  203960 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:03:36.493549  203960 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:03:37.072833  203960 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:03:37.073834  203960 kubeadm.go:319] 
	I1206 09:03:37.073950  203960 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:03:37.073970  203960 kubeadm.go:319] 
	I1206 09:03:37.074088  203960 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:03:37.074093  203960 kubeadm.go:319] 
	I1206 09:03:37.074125  203960 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:03:37.074209  203960 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:03:37.074257  203960 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:03:37.074260  203960 kubeadm.go:319] 
	I1206 09:03:37.074301  203960 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:03:37.074303  203960 kubeadm.go:319] 
	I1206 09:03:37.074344  203960 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:03:37.074346  203960 kubeadm.go:319] 
	I1206 09:03:37.074392  203960 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:03:37.074461  203960 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:03:37.074524  203960 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:03:37.074527  203960 kubeadm.go:319] 
	I1206 09:03:37.074591  203960 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:03:37.074674  203960 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:03:37.074676  203960 kubeadm.go:319] 
	I1206 09:03:37.074780  203960 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zdyova.id18aq0el1armeav \
	I1206 09:03:37.074903  203960 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:03:37.074918  203960 kubeadm.go:319] 	--control-plane 
	I1206 09:03:37.074921  203960 kubeadm.go:319] 
	I1206 09:03:37.075022  203960 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:03:37.075024  203960 kubeadm.go:319] 
	I1206 09:03:37.075091  203960 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zdyova.id18aq0el1armeav \
	I1206 09:03:37.075180  203960 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:03:37.078431  203960 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:03:37.078580  203960 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:03:37.078599  203960 cni.go:84] Creating CNI manager for ""
	I1206 09:03:37.078608  203960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:03:37.081167  203960 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:03:36.771768  205517 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:03:36.771778  205517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:03:36.771830  205517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-011599
	I1206 09:03:36.796508  205517 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:03:36.796522  205517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:03:36.796583  205517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-011599
	I1206 09:03:36.800655  205517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/cert-options-011599/id_rsa Username:docker}
	I1206 09:03:36.826120  205517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/cert-options-011599/id_rsa Username:docker}
	I1206 09:03:36.847313  205517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:03:36.900368  205517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:03:36.918787  205517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:03:36.942683  205517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:03:37.033468  205517 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
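	(Editorial note: the sed pipeline above rewrites the coredns ConfigMap so the Corefile gains a hosts{} stanza mapping host.minikube.internal to the gateway IP before the forward plugin. A simplified Go sketch of that string transformation follows; the sample Corefile fragment is hypothetical.)

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Hypothetical Corefile fragment; the real one lives in the coredns ConfigMap.
		corefile := `.:53 {
	        errors
	        forward . /etc/resolv.conf
	        cache 30
	}`

		hostsBlock := `        hosts {
	           192.168.94.1 host.minikube.internal
	           fallthrough
	        }
	`
		// Insert the hosts{} stanza immediately before the forward plugin,
		// mirroring what the sed expression above does.
		patched := strings.Replace(corefile,
			"        forward . /etc/resolv.conf",
			hostsBlock+"        forward . /etc/resolv.conf", 1)
		fmt.Println(patched)
	}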
	I1206 09:03:37.034642  205517 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:03:37.034684  205517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:03:37.265911  205517 api_server.go:72] duration metric: took 520.700922ms to wait for apiserver process to appear ...
	I1206 09:03:37.265929  205517 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:03:37.266010  205517 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8555/healthz ...
	I1206 09:03:37.272146  205517 api_server.go:279] https://192.168.94.2:8555/healthz returned 200:
	ok
	I1206 09:03:37.273096  205517 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:03:37.273105  205517 api_server.go:141] control plane version: v1.34.2
	I1206 09:03:37.273124  205517 api_server.go:131] duration metric: took 7.18939ms to wait for apiserver health ...
	I1206 09:03:37.273133  205517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:03:37.082445  203960 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:03:37.087446  203960 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:03:37.087455  203960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:03:37.103260  203960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:03:37.357014  203960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:03:37.357104  203960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:03:37.357217  203960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-006207 minikube.k8s.io/updated_at=2025_12_06T09_03_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=cert-expiration-006207 minikube.k8s.io/primary=true
	I1206 09:03:37.368838  203960 ops.go:34] apiserver oom_adj: -16
	I1206 09:03:37.441227  203960 kubeadm.go:1114] duration metric: took 84.198617ms to wait for elevateKubeSystemPrivileges
	I1206 09:03:37.461600  203960 kubeadm.go:403] duration metric: took 12.536786883s to StartCluster
	I1206 09:03:37.461638  203960 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:37.461724  203960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:03:37.463769  203960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:03:37.464034  203960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:03:37.464039  203960 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:03:37.464105  203960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:03:37.464208  203960 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-006207"
	I1206 09:03:37.464225  203960 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-006207"
	I1206 09:03:37.464237  203960 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-006207"
	I1206 09:03:37.464252  203960 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-006207"
	I1206 09:03:37.464255  203960 host.go:66] Checking if "cert-expiration-006207" exists ...
	I1206 09:03:37.464288  203960 config.go:182] Loaded profile config "cert-expiration-006207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:03:37.464679  203960 cli_runner.go:164] Run: docker container inspect cert-expiration-006207 --format={{.State.Status}}
	I1206 09:03:37.464802  203960 cli_runner.go:164] Run: docker container inspect cert-expiration-006207 --format={{.State.Status}}
	I1206 09:03:37.466408  203960 out.go:179] * Verifying Kubernetes components...
	I1206 09:03:37.467930  203960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:03:37.489938  203960 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:03:37.274336  205517 addons.go:530] duration metric: took 529.044461ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:03:37.276476  205517 system_pods.go:59] 5 kube-system pods found
	I1206 09:03:37.276497  205517 system_pods.go:61] "etcd-cert-options-011599" [9fb88f5c-6afd-47ef-89da-b3d1574cf1ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:03:37.276507  205517 system_pods.go:61] "kube-apiserver-cert-options-011599" [678a4b20-0636-47a1-8f4e-2fb591e67c68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:03:37.276515  205517 system_pods.go:61] "kube-controller-manager-cert-options-011599" [d259aa29-2b5a-49fc-a849-6768783db099] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:03:37.276521  205517 system_pods.go:61] "kube-scheduler-cert-options-011599" [04b5f37a-1414-4bc4-aa0d-2a780b579b37] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:03:37.276533  205517 system_pods.go:61] "storage-provisioner" [44428c61-feeb-48c9-b674-3a6cfae0bb50] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:03:37.276539  205517 system_pods.go:74] duration metric: took 3.400916ms to wait for pod list to return data ...
	I1206 09:03:37.276550  205517 kubeadm.go:587] duration metric: took 531.34397ms to wait for: map[apiserver:true system_pods:true]
	I1206 09:03:37.276562  205517 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:03:37.279231  205517 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:03:37.279244  205517 node_conditions.go:123] node cpu capacity is 8
	I1206 09:03:37.279255  205517 node_conditions.go:105] duration metric: took 2.689954ms to run NodePressure ...
	I1206 09:03:37.279264  205517 start.go:242] waiting for startup goroutines ...
	I1206 09:03:37.538430  205517 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-options-011599" context rescaled to 1 replicas
	I1206 09:03:37.538459  205517 start.go:247] waiting for cluster config update ...
	I1206 09:03:37.538471  205517 start.go:256] writing updated cluster config ...
	I1206 09:03:37.538878  205517 ssh_runner.go:195] Run: rm -f paused
	I1206 09:03:37.612577  205517 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:03:37.614534  205517 out.go:179] * Done! kubectl is now configured to use "cert-options-011599" cluster and "default" namespace by default
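The start log above ends with a health probe against https://192.168.94.2:8555/healthz returning 200 and "ok". As an illustrative aside (not part of the captured log), the same endpoint can be probed by hand; a minimal sketch, assuming the Kubernetes default that /healthz is readable without client credentials, and skipping TLS verification for brevity:

	# Sketch only: repeat the apiserver health probe reported in the log above.
	# -k skips TLS verification; a real check would pass the profile's CA via --cacert instead.
	curl -fsSk https://192.168.94.2:8555/healthz
	# expected output: ok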
	
	
	==> CRI-O <==
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.317577035Z" level=info msg="RDT not available in the host system"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.317590776Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.318544442Z" level=info msg="Conmon does support the --sync option"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.318564969Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.318579792Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.319327931Z" level=info msg="Conmon does support the --sync option"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.319344125Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.323185616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.323210419Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.323686081Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.324052556Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.324099811Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.407556301Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-txc4m Namespace:kube-system ID:4b8212f14913a8c6169c35eadea57f03d5d4339736e2e4f0e1f4f74cd770ec3d UID:b6adb37f-ed13-4c7b-b443-6c918b25c752 NetNS:/var/run/netns/b8e7c52a-861d-44df-b1fe-7d85c0e29bfd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001326c8}] Aliases:map[]}"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.407744555Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-txc4m for CNI network kindnet (type=ptp)"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408192812Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408217786Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408274311Z" level=info msg="Create NRI interface"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408401307Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408416655Z" level=info msg="runtime interface created"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408429482Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408436854Z" level=info msg="runtime interface starting up..."
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408442342Z" level=info msg="starting plugins..."
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.408454738Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 09:03:29 pause-845581 crio[2168]: time="2025-12-06T09:03:29.40883244Z" level=info msg="No systemd watchdog enabled"
	Dec 06 09:03:29 pause-845581 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	aaca6c4c6a8d7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   4b8212f14913a       coredns-66bc5c9577-txc4m               kube-system
	eec9250853c2c       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   25 seconds ago      Running             kube-proxy                0                   3c8b30dc207a9       kube-proxy-qw24c                       kube-system
	1cd69e4c42603       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   d42009f496a5e       kindnet-z5h5d                          kube-system
	f2ceef154e22a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   36 seconds ago      Running             etcd                      0                   80bd18230ba40       etcd-pause-845581                      kube-system
	83bc849744de0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   36 seconds ago      Running             kube-apiserver            0                   7a58e8f1803b7       kube-apiserver-pause-845581            kube-system
	3d999efa25cdc       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   36 seconds ago      Running             kube-scheduler            0                   43ff2144e2164       kube-scheduler-pause-845581            kube-system
	0d73f16cec903       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   36 seconds ago      Running             kube-controller-manager   0                   5ff665d45893a       kube-controller-manager-pause-845581   kube-system
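The container table above is the kind of listing crictl prints on the node. A minimal sketch for inspecting it by hand (the exact collection command is not shown in the report; minikube ssh access to the profile is assumed):

	# Sketch only: list all CRI containers on the pause-845581 node.
	minikube -p pause-845581 ssh -- sudo crictl ps -a
	# Filter to a single container by name, e.g. coredns:
	minikube -p pause-845581 ssh -- sudo crictl ps -a --name coredns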
	
	
	==> coredns [aaca6c4c6a8d78096304ba23ae8765340339c98d5b9dbb278c3b356cb82203f8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50891 - 19353 "HINFO IN 6307950807946082642.8606806498488255531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032217575s
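The CoreDNS lines above are container log output. A minimal sketch for pulling the same logs, either through the API server or straight from the runtime (pod name and container ID prefix taken from the sections above):

	# Sketch only: fetch the CoreDNS pod logs via kubectl.
	kubectl -n kube-system logs coredns-66bc5c9577-txc4m
	# Or read them from CRI-O on the node, using the container ID prefix from the table above.
	minikube -p pause-845581 ssh -- sudo crictl logs aaca6c4c6a8d7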
	
	
	==> describe nodes <==
	Name:               pause-845581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-845581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=pause-845581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_03_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:03:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-845581
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:03:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:03:23 +0000   Sat, 06 Dec 2025 09:03:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:03:23 +0000   Sat, 06 Dec 2025 09:03:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:03:23 +0000   Sat, 06 Dec 2025 09:03:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:03:23 +0000   Sat, 06 Dec 2025 09:03:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-845581
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                c5de0118-fe74-4283-96bb-752ca8539259
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-txc4m                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-845581                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-z5h5d                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-845581             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-845581    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-qw24c                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-845581             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-845581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-845581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-845581 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-845581 event: Registered Node pause-845581 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-845581 status is now: NodeReady
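The node description above matches what kubectl reports for this node. A minimal sketch for reproducing it, assuming the kubeconfig written for this profile is active:

	# Sketch only: reproduce the node description shown above.
	kubectl describe node pause-845581
	# Or pull just the node conditions:
	kubectl get node pause-845581 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'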
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [f2ceef154e22acc6d2c2e75a1eba5b6237a4b72eaa0cc6d3cfb7e2403be267aa] <==
	{"level":"warn","ts":"2025-12-06T09:03:03.518923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.526391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.536752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.548302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.556555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.566341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.575604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.587822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.597856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.610799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.622068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.636950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:03:03.736606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59694","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:03:16.304307Z","caller":"traceutil/trace.go:172","msg":"trace[284539081] linearizableReadLoop","detail":"{readStateIndex:391; appliedIndex:391; }","duration":"112.373455ms","start":"2025-12-06T09:03:16.191903Z","end":"2025-12-06T09:03:16.304276Z","steps":["trace[284539081] 'read index received'  (duration: 112.364516ms)","trace[284539081] 'applied index is now lower than readState.Index'  (duration: 7.661µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:03:16.304452Z","caller":"traceutil/trace.go:172","msg":"trace[608844041] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"138.112053ms","start":"2025-12-06T09:03:16.166325Z","end":"2025-12-06T09:03:16.304437Z","steps":["trace[608844041] 'process raft request'  (duration: 137.992209ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:03:16.304533Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.617117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:03:16.306356Z","caller":"traceutil/trace.go:172","msg":"trace[1312212085] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:379; }","duration":"114.449266ms","start":"2025-12-06T09:03:16.191892Z","end":"2025-12-06T09:03:16.306341Z","steps":["trace[1312212085] 'agreement among raft nodes before linearized reading'  (duration: 112.596857ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:03:16.544211Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.97399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-845581\" limit:1 ","response":"range_response_count:1 size:5997"}
	{"level":"info","ts":"2025-12-06T09:03:16.544282Z","caller":"traceutil/trace.go:172","msg":"trace[1397624417] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-845581; range_end:; response_count:1; response_revision:379; }","duration":"131.057799ms","start":"2025-12-06T09:03:16.413210Z","end":"2025-12-06T09:03:16.544268Z","steps":["trace[1397624417] 'agreement among raft nodes before linearized reading'  (duration: 25.406879ms)","trace[1397624417] 'range keys from in-memory index tree'  (duration: 105.525047ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:03:16.544774Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.601804ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790495131003811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.103.2\" mod_revision:204 > success:<request_put:<key:\"/registry/masterleases/192.168.103.2\" value_size:66 lease:4650418458276228000 >> failure:<request_range:<key:\"/registry/masterleases/192.168.103.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:03:16.544844Z","caller":"traceutil/trace.go:172","msg":"trace[1273427823] linearizableReadLoop","detail":"{readStateIndex:394; appliedIndex:393; }","duration":"101.969036ms","start":"2025-12-06T09:03:16.442865Z","end":"2025-12-06T09:03:16.544834Z","steps":["trace[1273427823] 'read index received'  (duration: 25.822µs)","trace[1273427823] 'applied index is now lower than readState.Index'  (duration: 101.942641ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:03:16.544889Z","caller":"traceutil/trace.go:172","msg":"trace[1500732150] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"147.590968ms","start":"2025-12-06T09:03:16.397275Z","end":"2025-12-06T09:03:16.544866Z","steps":["trace[1500732150] 'process raft request'  (duration: 41.369196ms)","trace[1500732150] 'compare'  (duration: 105.505322ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:03:16.544959Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.094534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-845581\" limit:1 ","response":"range_response_count:1 size:5560"}
	{"level":"info","ts":"2025-12-06T09:03:16.545001Z","caller":"traceutil/trace.go:172","msg":"trace[403124762] range","detail":"{range_begin:/registry/minions/pause-845581; range_end:; response_count:1; response_revision:380; }","duration":"102.126118ms","start":"2025-12-06T09:03:16.442855Z","end":"2025-12-06T09:03:16.544981Z","steps":["trace[403124762] 'agreement among raft nodes before linearized reading'  (duration: 102.018169ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:03:16.652248Z","caller":"traceutil/trace.go:172","msg":"trace[1236967041] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"100.322285ms","start":"2025-12-06T09:03:16.551906Z","end":"2025-12-06T09:03:16.652229Z","steps":["trace[1236967041] 'process raft request'  (duration: 94.994475ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:03:38 up 46 min,  0 user,  load average: 2.90, 1.74, 1.31
	Linux pause-845581 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
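The kernel section above combines uptime, kernel, and OS release information. A minimal sketch for collecting the same facts on the node (minikube ssh access assumed):

	# Sketch only: gather the host facts shown in the kernel section above.
	minikube -p pause-845581 ssh -- "uptime && uname -a && grep PRETTY_NAME /etc/os-release"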
	
	
	==> kindnet [1cd69e4c42603f6f424b431e36bb72a52f823339198846f95c3b3c5480f81252] <==
	I1206 09:03:13.049175       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:03:13.049435       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1206 09:03:13.049583       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:03:13.049609       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:03:13.049637       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:03:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:03:13.345861       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:03:13.346112       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:03:13.346132       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:03:13.346281       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:03:13.743458       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:03:13.743585       1 metrics.go:72] Registering metrics
	I1206 09:03:13.743682       1 controller.go:711] "Syncing nftables rules"
	I1206 09:03:23.352116       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:03:23.352185       1 main.go:301] handling current node
	I1206 09:03:33.354073       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:03:33.354103       1 main.go:301] handling current node
	
	
	==> kube-apiserver [83bc849744de03965195574dcc91d751d16f46d6a16b015504af6a1c68b187ce] <==
	I1206 09:03:04.490815       1 policy_source.go:240] refreshing policies
	I1206 09:03:04.491414       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:03:04.497874       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:03:04.498827       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1206 09:03:04.508254       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:03:04.510832       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:03:04.546365       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:03:04.564274       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:03:05.368352       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:03:05.373026       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:03:05.373043       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:03:06.093409       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:03:06.158945       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:03:06.293714       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:03:06.306847       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1206 09:03:06.308675       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:03:06.313814       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:03:06.520298       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:03:07.388470       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:03:07.403255       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:03:07.424939       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:03:12.216290       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:03:12.221091       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:03:12.509442       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1206 09:03:12.611976       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0d73f16cec903605672b7f5eba71cbb655f0cdf983fd6f72c5726ef836f26233] <==
	I1206 09:03:11.506503       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:03:11.506511       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:03:11.506513       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:03:11.506489       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:03:11.506894       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:03:11.507032       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:03:11.507166       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:03:11.507541       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:03:11.507693       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 09:03:11.510805       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:03:11.510856       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:03:11.511119       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 09:03:11.511184       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 09:03:11.511246       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 09:03:11.511256       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 09:03:11.511262       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 09:03:11.512142       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:03:11.513194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:03:11.513781       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:03:11.516530       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:03:11.518093       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-845581" podCIDRs=["10.244.0.0/24"]
	I1206 09:03:11.522184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:03:11.528347       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:03:11.531608       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:03:26.482937       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [eec9250853c2ca57aebb1d21db7963b81c996c5b22fb94e27561aaf119dad7d0] <==
	I1206 09:03:12.948980       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:03:13.017602       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:03:13.118012       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:03:13.118051       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1206 09:03:13.118137       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:03:13.140094       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:03:13.140185       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:03:13.147300       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:03:13.147939       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:03:13.147972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:03:13.149536       1 config.go:200] "Starting service config controller"
	I1206 09:03:13.149558       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:03:13.149591       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:03:13.149598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:03:13.149615       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:03:13.149629       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:03:13.149931       1 config.go:309] "Starting node config controller"
	I1206 09:03:13.150002       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:03:13.150036       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:03:13.250579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:03:13.250624       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:03:13.250603       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3d999efa25cdcc902bdcb270fd87b1c9cc14154168b76a167a9a70ad5a7c81e3] <==
	E1206 09:03:04.527770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:03:04.527826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:03:04.527873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:03:04.527951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:03:04.528165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:03:04.528257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:03:04.528257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:03:04.528318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:03:04.528368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:03:04.528396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:03:04.528438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:03:04.531921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:03:04.533349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:03:05.381730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:03:05.383589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:03:05.389532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:03:05.430014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:03:05.508728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:03:05.542379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:03:05.563654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:03:05.608266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:03:05.648497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:03:05.669352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:03:05.787725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:03:07.515298       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:03:08 pause-845581 kubelet[1310]: E1206 09:03:08.411532    1310 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-845581\" already exists" pod="kube-system/kube-scheduler-pause-845581"
	Dec 06 09:03:08 pause-845581 kubelet[1310]: I1206 09:03:08.445581    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-845581" podStartSLOduration=2.445548424 podStartE2EDuration="2.445548424s" podCreationTimestamp="2025-12-06 09:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:08.428539462 +0000 UTC m=+1.251470457" watchObservedRunningTime="2025-12-06 09:03:08.445548424 +0000 UTC m=+1.268479427"
	Dec 06 09:03:08 pause-845581 kubelet[1310]: I1206 09:03:08.485319    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-845581" podStartSLOduration=1.485297412 podStartE2EDuration="1.485297412s" podCreationTimestamp="2025-12-06 09:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:08.454641873 +0000 UTC m=+1.277572874" watchObservedRunningTime="2025-12-06 09:03:08.485297412 +0000 UTC m=+1.308228415"
	Dec 06 09:03:08 pause-845581 kubelet[1310]: I1206 09:03:08.485487    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-845581" podStartSLOduration=3.485478736 podStartE2EDuration="3.485478736s" podCreationTimestamp="2025-12-06 09:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:08.48359497 +0000 UTC m=+1.306525973" watchObservedRunningTime="2025-12-06 09:03:08.485478736 +0000 UTC m=+1.308409739"
	Dec 06 09:03:08 pause-845581 kubelet[1310]: I1206 09:03:08.519628    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-845581" podStartSLOduration=1.519607593 podStartE2EDuration="1.519607593s" podCreationTimestamp="2025-12-06 09:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:08.502244884 +0000 UTC m=+1.325175903" watchObservedRunningTime="2025-12-06 09:03:08.519607593 +0000 UTC m=+1.342538598"
	Dec 06 09:03:11 pause-845581 kubelet[1310]: I1206 09:03:11.588069    1310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:03:11 pause-845581 kubelet[1310]: I1206 09:03:11.588807    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550397    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b45c72b9-d95b-4226-a8f5-e4c45609d742-cni-cfg\") pod \"kindnet-z5h5d\" (UID: \"b45c72b9-d95b-4226-a8f5-e4c45609d742\") " pod="kube-system/kindnet-z5h5d"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550448    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b45c72b9-d95b-4226-a8f5-e4c45609d742-lib-modules\") pod \"kindnet-z5h5d\" (UID: \"b45c72b9-d95b-4226-a8f5-e4c45609d742\") " pod="kube-system/kindnet-z5h5d"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550473    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjlj\" (UniqueName: \"kubernetes.io/projected/b45c72b9-d95b-4226-a8f5-e4c45609d742-kube-api-access-xnjlj\") pod \"kindnet-z5h5d\" (UID: \"b45c72b9-d95b-4226-a8f5-e4c45609d742\") " pod="kube-system/kindnet-z5h5d"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550507    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e3bfb60-eb08-406f-ba98-8595995bc552-kube-proxy\") pod \"kube-proxy-qw24c\" (UID: \"6e3bfb60-eb08-406f-ba98-8595995bc552\") " pod="kube-system/kube-proxy-qw24c"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550529    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e3bfb60-eb08-406f-ba98-8595995bc552-xtables-lock\") pod \"kube-proxy-qw24c\" (UID: \"6e3bfb60-eb08-406f-ba98-8595995bc552\") " pod="kube-system/kube-proxy-qw24c"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550548    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e3bfb60-eb08-406f-ba98-8595995bc552-lib-modules\") pod \"kube-proxy-qw24c\" (UID: \"6e3bfb60-eb08-406f-ba98-8595995bc552\") " pod="kube-system/kube-proxy-qw24c"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550576    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwzj\" (UniqueName: \"kubernetes.io/projected/6e3bfb60-eb08-406f-ba98-8595995bc552-kube-api-access-knwzj\") pod \"kube-proxy-qw24c\" (UID: \"6e3bfb60-eb08-406f-ba98-8595995bc552\") " pod="kube-system/kube-proxy-qw24c"
	Dec 06 09:03:12 pause-845581 kubelet[1310]: I1206 09:03:12.550604    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b45c72b9-d95b-4226-a8f5-e4c45609d742-xtables-lock\") pod \"kindnet-z5h5d\" (UID: \"b45c72b9-d95b-4226-a8f5-e4c45609d742\") " pod="kube-system/kindnet-z5h5d"
	Dec 06 09:03:13 pause-845581 kubelet[1310]: I1206 09:03:13.418668    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qw24c" podStartSLOduration=1.418317829 podStartE2EDuration="1.418317829s" podCreationTimestamp="2025-12-06 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:13.418058574 +0000 UTC m=+6.240989579" watchObservedRunningTime="2025-12-06 09:03:13.418317829 +0000 UTC m=+6.241248832"
	Dec 06 09:03:14 pause-845581 kubelet[1310]: I1206 09:03:14.134597    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-z5h5d" podStartSLOduration=2.134573348 podStartE2EDuration="2.134573348s" podCreationTimestamp="2025-12-06 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:13.447286186 +0000 UTC m=+6.270217192" watchObservedRunningTime="2025-12-06 09:03:14.134573348 +0000 UTC m=+6.957504354"
	Dec 06 09:03:23 pause-845581 kubelet[1310]: I1206 09:03:23.688007    1310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 09:03:23 pause-845581 kubelet[1310]: I1206 09:03:23.736884    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkmpk\" (UniqueName: \"kubernetes.io/projected/b6adb37f-ed13-4c7b-b443-6c918b25c752-kube-api-access-lkmpk\") pod \"coredns-66bc5c9577-txc4m\" (UID: \"b6adb37f-ed13-4c7b-b443-6c918b25c752\") " pod="kube-system/coredns-66bc5c9577-txc4m"
	Dec 06 09:03:23 pause-845581 kubelet[1310]: I1206 09:03:23.736947    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6adb37f-ed13-4c7b-b443-6c918b25c752-config-volume\") pod \"coredns-66bc5c9577-txc4m\" (UID: \"b6adb37f-ed13-4c7b-b443-6c918b25c752\") " pod="kube-system/coredns-66bc5c9577-txc4m"
	Dec 06 09:03:24 pause-845581 kubelet[1310]: I1206 09:03:24.443218    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-txc4m" podStartSLOduration=12.443199396 podStartE2EDuration="12.443199396s" podCreationTimestamp="2025-12-06 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:03:24.443081523 +0000 UTC m=+17.266012526" watchObservedRunningTime="2025-12-06 09:03:24.443199396 +0000 UTC m=+17.266130402"
	Dec 06 09:03:33 pause-845581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:03:33 pause-845581 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:03:33 pause-845581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:03:33 pause-845581 systemd[1]: kubelet.service: Consumed 1.198s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-845581 -n pause-845581
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-845581 -n pause-845581: exit status 2 (343.137907ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-845581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-322324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-322324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (243.006947ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:07:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-322324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-322324 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-322324 describe deploy/metrics-server -n kube-system: exit status 1 (64.421065ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-322324 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-322324
helpers_test.go:243: (dbg) docker inspect old-k8s-version-322324:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f",
	        "Created": "2025-12-06T09:07:01.784357575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251756,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:07:01.839752763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/hosts",
	        "LogPath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f-json.log",
	        "Name": "/old-k8s-version-322324",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-322324:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-322324",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f",
	                "LowerDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-322324",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-322324/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-322324",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-322324",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-322324",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a3d1c24568be164da6116f2dacd8233deb5b963f6a0dd8235bd896b9f2a0fbce",
	            "SandboxKey": "/var/run/docker/netns/a3d1c24568be",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-322324": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6aeaf0351aa11b8b99e15f127f61cc1457ec80dfb36963930d49a8cf393d88b",
	                    "EndpointID": "86d75603a04edf6c5d35c2aea380844f04c62100ae986f8ed0ead4bfc7439600",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "d2:b8:02:a6:66:d2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-322324",
	                        "7e0820bc743c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-322324 -n old-k8s-version-322324
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-322324 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-322324 logs -n 25: (1.111937981s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-646473 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo docker system info                                                                                                                                                                                                      │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo containerd config dump                                                                                                                                                                                                  │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo crio config                                                                                                                                                                                                             │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ delete  │ -p cilium-646473                                                                                                                                                                                                                              │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:06 UTC │
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-322324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ stop    │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079    │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p NoKubernetes-328079 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-328079    │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ -p NoKubernetes-328079 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-328079    │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ delete  │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079    │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733      │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-322324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-322324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:07:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:07:10.709356  255989 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:07:10.709447  255989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:10.709453  255989 out.go:374] Setting ErrFile to fd 2...
	I1206 09:07:10.709458  255989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:10.709680  255989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:07:10.710136  255989 out.go:368] Setting JSON to false
	I1206 09:07:10.711365  255989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2982,"bootTime":1765009049,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:07:10.711435  255989 start.go:143] virtualization: kvm guest
	I1206 09:07:10.714788  255989 out.go:179] * [no-preload-769733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:07:10.716499  255989 notify.go:221] Checking for updates...
	I1206 09:07:10.716689  255989 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:07:10.718341  255989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:07:10.719785  255989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:07:10.721101  255989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:07:10.722304  255989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:07:10.723594  255989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:07:10.725821  255989 config.go:182] Loaded profile config "kubernetes-upgrade-702638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:10.725979  255989 config.go:182] Loaded profile config "old-k8s-version-322324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:07:10.726135  255989 config.go:182] Loaded profile config "stopped-upgrade-454433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 09:07:10.726294  255989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:07:10.754235  255989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:07:10.754366  255989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:10.829452  255989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:07:10.818229436 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:07:10.829558  255989 docker.go:319] overlay module found
	I1206 09:07:10.831340  255989 out.go:179] * Using the docker driver based on user configuration
	I1206 09:07:10.832660  255989 start.go:309] selected driver: docker
	I1206 09:07:10.832678  255989 start.go:927] validating driver "docker" against <nil>
	I1206 09:07:10.832692  255989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:07:10.833461  255989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:10.898785  255989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:07:10.887610946 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:07:10.898982  255989 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:07:10.899263  255989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:07:10.902086  255989 out.go:179] * Using Docker driver with root privileges
	I1206 09:07:10.903251  255989 cni.go:84] Creating CNI manager for ""
	I1206 09:07:10.903323  255989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:07:10.903337  255989 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:07:10.903436  255989 start.go:353] cluster config:
	{Name:no-preload-769733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:10.904697  255989 out.go:179] * Starting "no-preload-769733" primary control-plane node in "no-preload-769733" cluster
	I1206 09:07:10.905800  255989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:07:10.906910  255989 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:07:10.908038  255989 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:07:10.908117  255989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:07:10.908184  255989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/config.json ...
	I1206 09:07:10.908219  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/config.json: {Name:mk1cb5931b5ab0f876560fa78618e8bbf5d2b987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:10.908399  255989 cache.go:107] acquiring lock: {Name:mk3ec8e7f3239e63a4579f339a0b167cd40d12bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908408  255989 cache.go:107] acquiring lock: {Name:mk80da841620836604a4fb28eae69f74c14650a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908426  255989 cache.go:107] acquiring lock: {Name:mk00ae7798d573847547213a6282bfb842af8cd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908521  255989 cache.go:115] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1206 09:07:10.908524  255989 cache.go:107] acquiring lock: {Name:mk73f8905845e61a1676a39e5cfb18e7706db084 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908549  255989 cache.go:107] acquiring lock: {Name:mkbac531e41cac0c4d7d33feda6ddd5a2ba806cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908570  255989 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:10.908602  255989 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:10.908633  255989 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:10.908659  255989 cache.go:107] acquiring lock: {Name:mk53305a921f4ea2ac8a27c83edbdce617400bb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908708  255989 cache.go:115] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 09:07:10.908717  255989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 61.935µs
	I1206 09:07:10.908738  255989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 09:07:10.908751  255989 cache.go:107] acquiring lock: {Name:mke25c95d56fddc4c4597d3d7e7c1bb342b9d6b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908532  255989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 110.991µs
	I1206 09:07:10.908812  255989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 09:07:10.908807  255989 cache.go:107] acquiring lock: {Name:mk68870c832ef1623cfb9db003338cadec0ed3ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908824  255989 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:10.908897  255989 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:10.909007  255989 cache.go:115] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 09:07:10.909021  255989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 627.944µs
	I1206 09:07:10.909029  255989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 09:07:10.910128  255989 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:10.910136  255989 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:10.910139  255989 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:10.910210  255989 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:10.910850  255989 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:10.931415  255989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:07:10.931444  255989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:07:10.931458  255989 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:07:10.931484  255989 start.go:360] acquireMachinesLock for no-preload-769733: {Name:mke00f2a24f1a50a1bc4fbc79c0044e9888e3bc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.931588  255989 start.go:364] duration metric: took 87.679µs to acquireMachinesLock for "no-preload-769733"
	I1206 09:07:10.931620  255989 start.go:93] Provisioning new machine with config: &{Name:no-preload-769733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:07:10.931688  255989 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:07:05.958126  249953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:07:05.975441  249953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:07:05.992598  249953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:07:06.005053  249953 ssh_runner.go:195] Run: openssl version
	I1206 09:07:06.011129  249953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:06.018313  249953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:07:06.025307  249953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:06.028835  249953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:06.028885  249953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:06.063980  249953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:07:06.071649  249953 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:07:06.079331  249953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:07:06.086388  249953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:07:06.093714  249953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:07:06.098066  249953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:07:06.098140  249953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:07:06.133370  249953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:07:06.141269  249953 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:07:06.149116  249953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:07:06.156660  249953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:07:06.164113  249953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:07:06.167809  249953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:07:06.167857  249953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:07:06.208010  249953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:07:06.216384  249953 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:07:06.224913  249953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:07:06.229486  249953 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:07:06.229544  249953 kubeadm.go:401] StartCluster: {Name:old-k8s-version-322324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-322324 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:06.229628  249953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:07:06.229693  249953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:07:06.261098  249953 cri.go:89] found id: ""
	I1206 09:07:06.261166  249953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:07:06.269664  249953 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:07:06.277900  249953 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:07:06.277956  249953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:07:06.285702  249953 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:07:06.285724  249953 kubeadm.go:158] found existing configuration files:
	
	I1206 09:07:06.285768  249953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:07:06.294055  249953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:07:06.294129  249953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:07:06.302205  249953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:07:06.310953  249953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:07:06.311029  249953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:07:06.319190  249953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:07:06.326946  249953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:07:06.327021  249953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:07:06.334703  249953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:07:06.342686  249953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:07:06.342761  249953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:07:06.350939  249953 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:07:06.442484  249953 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:07:06.535383  249953 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:07:06.485916  222653 cri.go:89] found id: ""
	I1206 09:07:06.485944  222653 logs.go:282] 0 containers: []
	W1206 09:07:06.485954  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:06.485961  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:06.486062  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:06.524379  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:06.524418  222653 cri.go:89] found id: ""
	I1206 09:07:06.524429  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:06.524487  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:06.528543  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:06.528608  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:06.571768  222653 cri.go:89] found id: ""
	I1206 09:07:06.571789  222653 logs.go:282] 0 containers: []
	W1206 09:07:06.571796  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:06.571802  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:06.571861  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:06.618439  222653 cri.go:89] found id: ""
	I1206 09:07:06.618465  222653 logs.go:282] 0 containers: []
	W1206 09:07:06.618475  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:06.618486  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:06.618505  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:06.637866  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:06.637903  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:06.712930  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:06.712955  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:06.712971  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:06.755170  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:06.755199  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:06.830461  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:06.830540  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:06.870197  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:06.870225  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:06.928360  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:06.928393  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:06.971420  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:06.971455  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:09.589052  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:09.589521  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:09.589592  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:09.589650  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:09.630787  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:09.630815  222653 cri.go:89] found id: ""
	I1206 09:07:09.630825  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:09.630881  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.634973  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:09.635047  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:09.674953  222653 cri.go:89] found id: ""
	I1206 09:07:09.674983  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.675020  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:09.675029  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:09.675093  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:09.715329  222653 cri.go:89] found id: ""
	I1206 09:07:09.715357  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.715373  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:09.715381  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:09.715438  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:09.756013  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:09.756033  222653 cri.go:89] found id: ""
	I1206 09:07:09.756042  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:09.756105  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.760380  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:09.760448  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:09.807677  222653 cri.go:89] found id: ""
	I1206 09:07:09.807709  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.807721  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:09.807729  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:09.807786  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:09.852520  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:09.852546  222653 cri.go:89] found id: ""
	I1206 09:07:09.852556  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:09.852612  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.856776  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:09.856838  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:09.894073  222653 cri.go:89] found id: ""
	I1206 09:07:09.894098  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.894108  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:09.894115  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:09.894176  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:09.928380  222653 cri.go:89] found id: ""
	I1206 09:07:09.928416  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.928426  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:09.928437  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:09.928455  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:10.024128  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:10.024160  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:10.041862  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:10.041890  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:10.104944  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:10.104967  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:10.104982  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:10.147088  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:10.147126  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:10.223802  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:10.223840  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:10.264387  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:10.264415  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:10.307815  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:10.307846  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:09.571042  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:09.571510  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:09.571578  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:09.571641  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:09.601395  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:09.601419  224160 cri.go:89] found id: ""
	I1206 09:07:09.601429  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:09.601484  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.605751  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:09.605820  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:09.635504  224160 cri.go:89] found id: ""
	I1206 09:07:09.635536  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.635546  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:09.635553  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:09.635604  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:09.664997  224160 cri.go:89] found id: ""
	I1206 09:07:09.665024  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.665037  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:09.665044  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:09.665102  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:09.695837  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:09.695855  224160 cri.go:89] found id: ""
	I1206 09:07:09.695862  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:09.695908  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.700576  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:09.700646  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:09.734255  224160 cri.go:89] found id: ""
	I1206 09:07:09.734282  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.734292  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:09.734300  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:09.734372  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:09.767137  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:09.767159  224160 cri.go:89] found id: ""
	I1206 09:07:09.767169  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:09.767316  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.772305  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:09.772383  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:09.815271  224160 cri.go:89] found id: ""
	I1206 09:07:09.815295  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.815307  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:09.815315  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:09.815392  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:09.845233  224160 cri.go:89] found id: ""
	I1206 09:07:09.845261  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.845273  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:09.845283  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:09.845295  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:09.955042  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:09.955071  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:09.968817  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:09.968840  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:10.025635  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:10.025656  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:10.025672  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:10.059181  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:10.059207  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:10.088126  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:10.088164  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:10.115652  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:10.115675  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:10.174448  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:10.174492  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:10.934340  255989 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:07:10.934550  255989 start.go:159] libmachine.API.Create for "no-preload-769733" (driver="docker")
	I1206 09:07:10.934581  255989 client.go:173] LocalClient.Create starting
	I1206 09:07:10.934649  255989 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 09:07:10.934683  255989 main.go:143] libmachine: Decoding PEM data...
	I1206 09:07:10.934702  255989 main.go:143] libmachine: Parsing certificate...
	I1206 09:07:10.934756  255989 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 09:07:10.934781  255989 main.go:143] libmachine: Decoding PEM data...
	I1206 09:07:10.934793  255989 main.go:143] libmachine: Parsing certificate...
	I1206 09:07:10.935168  255989 cli_runner.go:164] Run: docker network inspect no-preload-769733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:07:10.954867  255989 cli_runner.go:211] docker network inspect no-preload-769733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:07:10.954956  255989 network_create.go:284] running [docker network inspect no-preload-769733] to gather additional debugging logs...
	I1206 09:07:10.954979  255989 cli_runner.go:164] Run: docker network inspect no-preload-769733
	W1206 09:07:10.972559  255989 cli_runner.go:211] docker network inspect no-preload-769733 returned with exit code 1
	I1206 09:07:10.972585  255989 network_create.go:287] error running [docker network inspect no-preload-769733]: docker network inspect no-preload-769733: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-769733 not found
	I1206 09:07:10.972604  255989 network_create.go:289] output of [docker network inspect no-preload-769733]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-769733 not found
	
	** /stderr **
	I1206 09:07:10.972688  255989 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:07:10.992148  255989 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9cbe8712784d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:e7:96:d9:b6:56} reservation:<nil>}
	I1206 09:07:10.992902  255989 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e3326c841ae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:98:ee:f3:0b:a9} reservation:<nil>}
	I1206 09:07:10.993639  255989 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c7af411946b0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:ab:a1:53:1d:7e} reservation:<nil>}
	I1206 09:07:10.994174  255989 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f6aeaf0351aa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:f6:31:65:11:00} reservation:<nil>}
	I1206 09:07:10.994572  255989 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6a656c6b5a08 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:62:de:88:d9:0b:15} reservation:<nil>}
	I1206 09:07:10.995179  255989 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06c80}
	I1206 09:07:10.995205  255989 network_create.go:124] attempt to create docker network no-preload-769733 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1206 09:07:10.995259  255989 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-769733 no-preload-769733
	I1206 09:07:11.048530  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 09:07:11.049374  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:11.050720  255989 network_create.go:108] docker network no-preload-769733 192.168.94.0/24 created
	I1206 09:07:11.050749  255989 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-769733" container
	I1206 09:07:11.050814  255989 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:07:11.052113  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:11.063376  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:11.072166  255989 cli_runner.go:164] Run: docker volume create no-preload-769733 --label name.minikube.sigs.k8s.io=no-preload-769733 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:07:11.091902  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 09:07:11.092807  255989 oci.go:103] Successfully created a docker volume no-preload-769733
	I1206 09:07:11.092871  255989 cli_runner.go:164] Run: docker run --rm --name no-preload-769733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-769733 --entrypoint /usr/bin/test -v no-preload-769733:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:07:11.509857  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 09:07:11.509888  255989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 601.084495ms
	I1206 09:07:11.509901  255989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 09:07:11.553024  255989 oci.go:107] Successfully prepared a docker volume no-preload-769733
	I1206 09:07:11.553072  255989 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1206 09:07:11.553158  255989 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:07:11.553193  255989 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:07:11.553248  255989 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:07:11.612419  255989 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-769733 --name no-preload-769733 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-769733 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-769733 --network no-preload-769733 --ip 192.168.94.2 --volume no-preload-769733:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:07:11.904119  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Running}}
	I1206 09:07:11.926219  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:11.950273  255989 cli_runner.go:164] Run: docker exec no-preload-769733 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:07:11.998160  255989 oci.go:144] the created container "no-preload-769733" has a running status.
	I1206 09:07:11.998193  255989 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa...
	I1206 09:07:12.035252  255989 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:07:12.069183  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:12.110255  255989 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:07:12.110278  255989 kic_runner.go:114] Args: [docker exec --privileged no-preload-769733 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:07:12.184227  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:12.214036  255989 machine.go:94] provisionDockerMachine start ...
	I1206 09:07:12.214174  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:12.249194  255989 main.go:143] libmachine: Using SSH client type: native
	I1206 09:07:12.249532  255989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1206 09:07:12.249557  255989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:07:12.250421  255989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46140->127.0.0.1:33063: read: connection reset by peer
	I1206 09:07:12.274681  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 09:07:12.274723  255989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.366170347s
	I1206 09:07:12.274748  255989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 09:07:12.320591  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 09:07:12.320637  255989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.412254947s
	I1206 09:07:12.320659  255989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 09:07:12.328259  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 09:07:12.328297  255989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.419775507s
	I1206 09:07:12.328314  255989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 09:07:12.397930  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 09:07:12.397970  255989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.489218658s
	I1206 09:07:12.398002  255989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 09:07:12.398024  255989 cache.go:87] Successfully saved all images to host disk.
	I1206 09:07:15.378907  255989 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-769733
	
	I1206 09:07:15.378937  255989 ubuntu.go:182] provisioning hostname "no-preload-769733"
	I1206 09:07:15.379012  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:15.397856  255989 main.go:143] libmachine: Using SSH client type: native
	I1206 09:07:15.398133  255989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1206 09:07:15.398154  255989 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-769733 && echo "no-preload-769733" | sudo tee /etc/hostname
	I1206 09:07:15.534926  255989 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-769733
	
	I1206 09:07:15.535036  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:15.553256  255989 main.go:143] libmachine: Using SSH client type: native
	I1206 09:07:15.553499  255989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1206 09:07:15.553520  255989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-769733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-769733/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-769733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:07:15.687658  255989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:07:15.687698  255989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:07:15.687722  255989 ubuntu.go:190] setting up certificates
	I1206 09:07:15.687731  255989 provision.go:84] configureAuth start
	I1206 09:07:15.687787  255989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-769733
	I1206 09:07:15.705634  255989 provision.go:143] copyHostCerts
	I1206 09:07:15.705707  255989 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:07:15.705724  255989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:07:15.705818  255989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:07:15.705933  255989 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:07:15.705949  255989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:07:15.706010  255989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:07:15.706122  255989 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:07:15.706133  255989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:07:15.706169  255989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:07:15.706239  255989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.no-preload-769733 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-769733]
	I1206 09:07:15.999796  249953 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1206 09:07:15.999899  249953 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:07:16.000032  249953 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:07:16.000197  249953 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:07:16.000253  249953 kubeadm.go:319] OS: Linux
	I1206 09:07:16.000313  249953 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:07:16.000374  249953 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:07:16.000444  249953 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:07:16.000511  249953 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:07:16.000624  249953 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:07:16.000693  249953 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:07:16.000760  249953 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:07:16.000839  249953 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:07:16.000930  249953 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:07:16.001073  249953 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:07:16.001184  249953 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 09:07:16.001261  249953 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:07:16.003935  249953 out.go:252]   - Generating certificates and keys ...
	I1206 09:07:16.004046  249953 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:07:16.004143  249953 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:07:16.004240  249953 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:07:16.004336  249953 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:07:16.004434  249953 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:07:16.004503  249953 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:07:16.004582  249953 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:07:16.004759  249953 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-322324] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:07:16.004859  249953 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:07:16.005051  249953 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-322324] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:07:16.005141  249953 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:07:16.005259  249953 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:07:16.005361  249953 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:07:16.005446  249953 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:07:16.005524  249953 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:07:16.005600  249953 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:07:16.005695  249953 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:07:16.005781  249953 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:07:16.005908  249953 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:07:16.006082  249953 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:07:16.007458  249953 out.go:252]   - Booting up control plane ...
	I1206 09:07:16.007579  249953 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:07:16.007685  249953 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:07:16.007791  249953 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:07:16.007951  249953 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:07:16.008159  249953 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:07:16.008236  249953 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:07:16.008491  249953 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 09:07:16.008629  249953 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002384 seconds
	I1206 09:07:16.008764  249953 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:07:16.008923  249953 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:07:16.009039  249953 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:07:16.009314  249953 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-322324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:07:16.009399  249953 kubeadm.go:319] [bootstrap-token] Using token: o8hb1i.ymis9idm9gbc71mk
	I1206 09:07:16.011218  249953 out.go:252]   - Configuring RBAC rules ...
	I1206 09:07:16.011338  249953 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:07:16.011428  249953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:07:16.011616  249953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:07:16.011779  249953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:07:16.011960  249953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:07:16.012107  249953 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:07:16.012298  249953 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:07:16.012369  249953 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:07:16.012411  249953 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:07:16.012418  249953 kubeadm.go:319] 
	I1206 09:07:16.012479  249953 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:07:16.012488  249953 kubeadm.go:319] 
	I1206 09:07:16.012635  249953 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:07:16.012651  249953 kubeadm.go:319] 
	I1206 09:07:16.012696  249953 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:07:16.012927  249953 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:07:16.013047  249953 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:07:16.013067  249953 kubeadm.go:319] 
	I1206 09:07:16.013170  249953 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:07:16.013180  249953 kubeadm.go:319] 
	I1206 09:07:16.013242  249953 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:07:16.013251  249953 kubeadm.go:319] 
	I1206 09:07:16.013351  249953 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:07:16.013452  249953 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:07:16.013531  249953 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:07:16.013541  249953 kubeadm.go:319] 
	I1206 09:07:16.013653  249953 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:07:16.013746  249953 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:07:16.013755  249953 kubeadm.go:319] 
	I1206 09:07:16.013876  249953 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token o8hb1i.ymis9idm9gbc71mk \
	I1206 09:07:16.014056  249953 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:07:16.014091  249953 kubeadm.go:319] 	--control-plane 
	I1206 09:07:16.014097  249953 kubeadm.go:319] 
	I1206 09:07:16.014205  249953 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:07:16.014220  249953 kubeadm.go:319] 
	I1206 09:07:16.014349  249953 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token o8hb1i.ymis9idm9gbc71mk \
	I1206 09:07:16.014488  249953 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:07:16.014506  249953 cni.go:84] Creating CNI manager for ""
	I1206 09:07:16.014513  249953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:07:16.016805  249953 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:07:12.850241  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:12.850658  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:12.850714  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:12.850761  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:12.919262  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:12.919285  222653 cri.go:89] found id: ""
	I1206 09:07:12.919344  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:12.919425  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:12.924031  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:12.924088  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:12.962588  222653 cri.go:89] found id: ""
	I1206 09:07:12.962613  222653 logs.go:282] 0 containers: []
	W1206 09:07:12.962621  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:12.962628  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:12.962679  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:13.029685  222653 cri.go:89] found id: ""
	I1206 09:07:13.029710  222653 logs.go:282] 0 containers: []
	W1206 09:07:13.029719  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:13.029728  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:13.029780  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:13.069420  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:13.069444  222653 cri.go:89] found id: ""
	I1206 09:07:13.069455  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:13.069511  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:13.073704  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:13.073763  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:13.121197  222653 cri.go:89] found id: ""
	I1206 09:07:13.121223  222653 logs.go:282] 0 containers: []
	W1206 09:07:13.121233  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:13.121241  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:13.121303  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:13.166008  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:13.166032  222653 cri.go:89] found id: ""
	I1206 09:07:13.166042  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:13.166125  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:13.170532  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:13.170611  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:13.207075  222653 cri.go:89] found id: ""
	I1206 09:07:13.207102  222653 logs.go:282] 0 containers: []
	W1206 09:07:13.207112  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:13.207120  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:13.207178  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:13.242714  222653 cri.go:89] found id: ""
	I1206 09:07:13.242739  222653 logs.go:282] 0 containers: []
	W1206 09:07:13.242750  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:13.242760  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:13.242774  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:13.304263  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:13.304284  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:13.304295  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:13.345157  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:13.345189  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:13.415592  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:13.415624  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:13.449901  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:13.449927  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:13.493945  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:13.493974  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:13.531541  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:13.531562  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:13.622614  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:13.622648  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:16.139891  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:16.140406  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
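The paired `api_server.go:253`/`api_server.go:269` lines above are a readiness probe: minikube keeps issuing a GET against the apiserver's `/healthz` endpoint and records "stopped" while the TCP connection is refused. A minimal sketch of such a probe is shown below; the endpoint URL, the 2-second timeout, and skipping TLS verification are assumptions made for illustration only, not minikube's actual implementation.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues a single GET against an apiserver /healthz endpoint and
// reports whether it answered 200 OK. TLS verification is skipped only because
// this sketch has no access to the cluster CA (an assumption, not a recommendation).
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" while the apiserver is down
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Poll a few times with a fixed delay, mirroring the retry cadence seen in the log.
	for i := 0; i < 3; i++ {
		if err := probeHealthz("https://192.168.103.2:8443/healthz"); err != nil {
			fmt.Println("apiserver not ready:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```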
	I1206 09:07:16.140464  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:16.140522  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:16.180136  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:16.180160  222653 cri.go:89] found id: ""
	I1206 09:07:16.180171  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:16.180228  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.184945  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:16.185030  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:16.225521  222653 cri.go:89] found id: ""
	I1206 09:07:16.225550  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.225561  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:16.225568  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:16.225619  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:16.285459  222653 cri.go:89] found id: ""
	I1206 09:07:16.285490  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.285499  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:16.285507  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:16.285567  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:16.328689  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:16.328711  222653 cri.go:89] found id: ""
	I1206 09:07:16.328721  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:16.328776  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.332610  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:16.332676  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:16.367770  222653 cri.go:89] found id: ""
	I1206 09:07:16.367796  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.367807  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:16.367815  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:16.367870  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:16.406206  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:16.406231  222653 cri.go:89] found id: ""
	I1206 09:07:16.406242  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:16.406294  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.410111  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:16.410189  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:12.706781  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:12.707242  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:12.707304  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:12.707428  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:12.744698  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:12.744725  224160 cri.go:89] found id: ""
	I1206 09:07:12.744735  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:12.744784  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:12.749332  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:12.749402  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:12.777451  224160 cri.go:89] found id: ""
	I1206 09:07:12.777480  224160 logs.go:282] 0 containers: []
	W1206 09:07:12.777492  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:12.777507  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:12.777572  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:12.805458  224160 cri.go:89] found id: ""
	I1206 09:07:12.805490  224160 logs.go:282] 0 containers: []
	W1206 09:07:12.805502  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:12.805510  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:12.805567  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:12.838189  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:12.838210  224160 cri.go:89] found id: ""
	I1206 09:07:12.838218  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:12.838262  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:12.843559  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:12.843638  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:12.890786  224160 cri.go:89] found id: ""
	I1206 09:07:12.890816  224160 logs.go:282] 0 containers: []
	W1206 09:07:12.890852  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:12.890861  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:12.892037  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:12.926302  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:12.926321  224160 cri.go:89] found id: ""
	I1206 09:07:12.926331  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:12.926385  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:12.930343  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:12.930401  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:12.960946  224160 cri.go:89] found id: ""
	I1206 09:07:12.960966  224160 logs.go:282] 0 containers: []
	W1206 09:07:12.960974  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:12.960980  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:12.961048  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:13.006621  224160 cri.go:89] found id: ""
	I1206 09:07:13.006664  224160 logs.go:282] 0 containers: []
	W1206 09:07:13.006674  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:13.006685  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:13.006699  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:13.049247  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:13.049282  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:13.120614  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:13.120656  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:13.160599  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:13.160635  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:13.246350  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:13.246379  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:13.260674  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:13.260697  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:13.320787  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:13.320805  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:13.320816  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:13.352599  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:13.352626  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:15.884062  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:15.884422  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:15.884480  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:15.884540  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:15.911845  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:15.911867  224160 cri.go:89] found id: ""
	I1206 09:07:15.911876  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:15.911928  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:15.915912  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:15.916013  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:15.945043  224160 cri.go:89] found id: ""
	I1206 09:07:15.945069  224160 logs.go:282] 0 containers: []
	W1206 09:07:15.945081  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:15.945088  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:15.945152  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:15.983423  224160 cri.go:89] found id: ""
	I1206 09:07:15.983451  224160 logs.go:282] 0 containers: []
	W1206 09:07:15.983462  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:15.983469  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:15.983522  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:16.018180  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:16.018198  224160 cri.go:89] found id: ""
	I1206 09:07:16.018208  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:16.018257  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.022266  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:16.022328  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:16.052872  224160 cri.go:89] found id: ""
	I1206 09:07:16.052897  224160 logs.go:282] 0 containers: []
	W1206 09:07:16.052907  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:16.052916  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:16.052972  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:16.082270  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:16.082292  224160 cri.go:89] found id: ""
	I1206 09:07:16.082301  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:16.082357  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.086421  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:16.086486  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:16.115829  224160 cri.go:89] found id: ""
	I1206 09:07:16.115855  224160 logs.go:282] 0 containers: []
	W1206 09:07:16.115866  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:16.115874  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:16.115930  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:16.144090  224160 cri.go:89] found id: ""
	I1206 09:07:16.144126  224160 logs.go:282] 0 containers: []
	W1206 09:07:16.144136  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:16.144148  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:16.144168  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:16.178435  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:16.178462  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:16.244309  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:16.244345  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:16.291793  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:16.291818  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:16.414761  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:16.414794  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:16.429889  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:16.429920  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:16.498222  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:16.498243  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:16.498258  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:16.539036  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:16.539070  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
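Each `cri.go:54`/`logs.go:123` pair above follows the same two-step idiom: list container IDs for a component with `crictl ps -a --quiet --name=...`, then tail each container's logs. The rough local sketch below reproduces that idiom, assuming `crictl` is on PATH and may be invoked through sudo; it is not the minikube code itself.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose name
// matches the given filter, using crictl's --quiet output (one ID per line).
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs prints the last n log lines of one container.
func tailLogs(id string, n int) error {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	if len(ids) == 0 {
		fmt.Println(`No container was found matching "kube-apiserver"`)
		return
	}
	for _, id := range ids {
		_ = tailLogs(id, 400)
	}
}
```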
	I1206 09:07:15.748318  255989 provision.go:177] copyRemoteCerts
	I1206 09:07:15.748380  255989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:07:15.748412  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:15.767082  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:15.875466  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:07:15.898225  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:07:15.917318  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:07:15.935203  255989 provision.go:87] duration metric: took 247.458267ms to configureAuth
	I1206 09:07:15.935232  255989 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:07:15.935432  255989 config.go:182] Loaded profile config "no-preload-769733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:15.935541  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:15.955701  255989 main.go:143] libmachine: Using SSH client type: native
	I1206 09:07:15.955897  255989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1206 09:07:15.955913  255989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:07:16.274227  255989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:07:16.274255  255989 machine.go:97] duration metric: took 4.060196139s to provisionDockerMachine
	I1206 09:07:16.274286  255989 client.go:176] duration metric: took 5.339676742s to LocalClient.Create
	I1206 09:07:16.274309  255989 start.go:167] duration metric: took 5.33975868s to libmachine.API.Create "no-preload-769733"
	I1206 09:07:16.274321  255989 start.go:293] postStartSetup for "no-preload-769733" (driver="docker")
	I1206 09:07:16.274343  255989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:07:16.274416  255989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:07:16.274471  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:16.297640  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:16.398592  255989 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:07:16.403160  255989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:07:16.403196  255989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:07:16.403209  255989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:07:16.403269  255989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:07:16.403451  255989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:07:16.403583  255989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:07:16.412336  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:07:16.435598  255989 start.go:296] duration metric: took 161.262623ms for postStartSetup
	I1206 09:07:16.436050  255989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-769733
	I1206 09:07:16.458102  255989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/config.json ...
	I1206 09:07:16.458396  255989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:07:16.458448  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:16.482632  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:16.582121  255989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:07:16.587163  255989 start.go:128] duration metric: took 5.655460621s to createHost
	I1206 09:07:16.587197  255989 start.go:83] releasing machines lock for "no-preload-769733", held for 5.655591978s
	I1206 09:07:16.587271  255989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-769733
	I1206 09:07:16.609854  255989 ssh_runner.go:195] Run: cat /version.json
	I1206 09:07:16.609907  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:16.609936  255989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:07:16.610034  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:16.632017  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:16.632365  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:16.789964  255989 ssh_runner.go:195] Run: systemctl --version
	I1206 09:07:16.797546  255989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:07:16.837652  255989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:07:16.843184  255989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:07:16.843241  255989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:07:16.875435  255989 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:07:16.875460  255989 start.go:496] detecting cgroup driver to use...
	I1206 09:07:16.875509  255989 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:07:16.875576  255989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:07:16.898701  255989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:07:16.914622  255989 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:07:16.914693  255989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:07:16.936570  255989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:07:16.966653  255989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:07:17.061144  255989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:07:17.146251  255989 docker.go:234] disabling docker service ...
	I1206 09:07:17.146314  255989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:07:17.165775  255989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:07:17.178776  255989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:07:17.262233  255989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:07:17.344760  255989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:07:17.357901  255989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:07:17.372631  255989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:07:17.372689  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.383601  255989 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:07:17.383675  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.393092  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.402233  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.411567  255989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:07:17.420388  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.430003  255989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.444618  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
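The run of `sed -i` commands above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: it pins the pause image, sets `cgroup_manager = "systemd"`, re-adds `conmon_cgroup = "pod"`, and seeds `default_sysctls` with `net.ipv4.ip_unprivileged_port_start=0`. The hedged sketch below performs equivalent substitutions in Go against a local copy of the file; the path and key names are taken from the log, while using regexp instead of sed on the remote host is purely illustrative.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same line-level substitutions that the log's sed
// commands perform, against whatever file path it is given.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)

	// Pin the pause image and the cgroup manager (replace the whole line, like `sed 's|^.*key = .*$|...|'`).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	// Ensure a default_sysctls block exists, then open unprivileged ports from 0,
	// matching the last two sed calls in the log.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n]\n"
	}
	conf = regexp.MustCompile(`(?m)^ *default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")

	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	// Operating on a local copy named "02-crio.conf" is an assumption for this sketch.
	if err := rewriteCrioConf("02-crio.conf"); err != nil {
		fmt.Println("rewrite failed:", err)
	}
}
```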
	I1206 09:07:17.453876  255989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:07:17.462270  255989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:07:17.470410  255989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:07:17.552642  255989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:07:17.697306  255989 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:07:17.697384  255989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:07:17.701411  255989 start.go:564] Will wait 60s for crictl version
	I1206 09:07:17.701458  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:17.705201  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:07:17.730717  255989 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:07:17.730804  255989 ssh_runner.go:195] Run: crio --version
	I1206 09:07:17.758759  255989 ssh_runner.go:195] Run: crio --version
	I1206 09:07:17.788391  255989 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 09:07:17.789487  255989 cli_runner.go:164] Run: docker network inspect no-preload-769733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:07:17.807311  255989 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:07:17.811445  255989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
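The `grep`/`bash -c` pair above is an idempotent way of pinning `host.minikube.internal` in the guest's `/etc/hosts`: drop any stale line for that hostname, then append the current gateway IP. The short sketch below captures the same idea; the hostname and IP come from the log, while operating on a plain local file copy (rather than over SSH with sudo) is an assumption for illustration.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\thostname" and appends
// "ip\thostname", mirroring the grep/echo/cp pipeline in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry, like `grep -v $'\thost.minikube.internal$'`
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// "hosts.copy" is a hypothetical scratch file for this sketch, not /etc/hosts itself.
	if err := ensureHostsEntry("hosts.copy", "192.168.94.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```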
	I1206 09:07:17.821852  255989 kubeadm.go:884] updating cluster {Name:no-preload-769733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:07:17.821972  255989 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:07:17.822034  255989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:07:17.845595  255989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 09:07:17.845620  255989 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 09:07:17.845731  255989 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:17.845764  255989 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:17.845800  255989 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:17.845728  255989 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:17.845763  255989 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:17.845749  255989 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:17.845766  255989 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1206 09:07:17.845740  255989 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:17.846925  255989 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:17.846942  255989 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:17.846945  255989 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:17.846950  255989 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:17.846949  255989 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1206 09:07:17.846934  255989 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:17.846951  255989 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:17.846925  255989 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:17.961600  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:17.970315  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:17.972149  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:17.975463  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:17.989933  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:17.995032  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1206 09:07:17.996719  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.008328  255989 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1206 09:07:18.008378  255989 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:18.008429  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.014677  255989 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1206 09:07:18.014720  255989 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:18.014789  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.017818  255989 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1206 09:07:18.017869  255989 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:18.017915  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.067676  255989 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1206 09:07:18.067720  255989 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:18.067718  255989 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1206 09:07:18.067751  255989 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1206 09:07:18.067770  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.067790  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.067685  255989 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1206 09:07:18.067822  255989 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:18.067837  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:18.067748  255989 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1206 09:07:18.067878  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:18.067878  255989 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.067918  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.067801  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:18.067858  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.072890  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:18.072898  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:07:18.100606  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:18.100675  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:18.100731  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:18.100773  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:18.100689  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.106162  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:07:18.106175  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:18.137196  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:18.137771  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:18.140334  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:18.142944  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:18.143036  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:18.143059  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:07:18.142945  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.174636  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:18.174748  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:18.174838  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:18.177808  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 09:07:18.177946  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1206 09:07:18.180760  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.180768  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1206 09:07:18.180836  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:07:18.181397  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1206 09:07:18.181424  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 09:07:18.181501  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1206 09:07:18.181509  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 09:07:18.202855  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:18.202958  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:18.203009  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1206 09:07:18.202959  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1206 09:07:18.203039  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1206 09:07:18.203058  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1206 09:07:18.214856  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:18.214904  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1206 09:07:18.214929  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1206 09:07:18.214862  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1206 09:07:18.214956  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:18.214958  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1206 09:07:18.214958  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1206 09:07:18.214974  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1206 09:07:18.215041  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1206 09:07:18.215067  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1206 09:07:18.348891  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1206 09:07:18.348933  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1206 09:07:18.376009  255989 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1206 09:07:18.376089  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1206 09:07:18.858609  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1206 09:07:18.858655  255989 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:18.858701  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:18.910941  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:20.016886  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.158156195s)
	I1206 09:07:20.016916  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1206 09:07:20.016944  255989 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1206 09:07:20.016954  255989 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.105980936s)
	I1206 09:07:20.017037  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1206 09:07:20.017107  255989 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1206 09:07:20.017156  255989 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:20.017210  255989 ssh_runner.go:195] Run: which crictl
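The `cache_images.go`/`crio.go:275` lines above follow a simple cache protocol for the no-preload case: probe the runtime with `podman image inspect --format {{.Id}}`, and if the expected image is missing, scp the cached tarball into `/var/lib/minikube/images/` and feed it to `podman load -i`. The sketch below shows that check-then-load step for a single image; the image reference and tarball path are taken from the log, but running podman directly instead of through minikube's ssh_runner is an assumption.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether the container runtime already has the image,
// using the same `podman image inspect --format {{.Id}}` probe as the log.
func imagePresent(ref string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

// loadFromTarball imports a cached image archive, matching the
// `sudo podman load -i /var/lib/minikube/images/<name>` calls in the log.
func loadFromTarball(path string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	ref := "registry.k8s.io/pause:3.10.1"
	if imagePresent(ref) {
		fmt.Println(ref, "already loaded")
		return
	}
	if err := loadFromTarball("/var/lib/minikube/images/pause_3.10.1"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("transferred and loaded", ref)
}
```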
	I1206 09:07:16.017979  249953 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:07:16.022210  249953 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1206 09:07:16.022226  249953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:07:16.035082  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:07:16.805703  249953 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:07:16.805781  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:16.805811  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-322324 minikube.k8s.io/updated_at=2025_12_06T09_07_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=old-k8s-version-322324 minikube.k8s.io/primary=true
	I1206 09:07:16.816361  249953 ops.go:34] apiserver oom_adj: -16
	I1206 09:07:16.907267  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:17.408200  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:17.908280  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:18.407683  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:18.908231  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:19.407851  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:19.908197  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:20.408163  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:20.907613  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
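The repeated `kubectl get sa default` lines above (process 249953) are a readiness poll: kubeadm bootstrap is treated as settled once the `default` ServiceAccount exists, so the same command is retried roughly every half second until it succeeds. A minimal poll loop in the same spirit is sketched below; the kubectl path and `--kubeconfig` flag are copied from the log, while the 30-second budget is an assumption.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the ~500ms polling cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the service account exists; bootstrap can continue
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		30*time.Second, // assumed budget for this sketch
	)
	if err != nil {
		fmt.Println(err)
	}
}
```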
	I1206 09:07:16.451544  222653 cri.go:89] found id: ""
	I1206 09:07:16.451572  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.451582  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:16.451590  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:16.451648  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:16.493766  222653 cri.go:89] found id: ""
	I1206 09:07:16.493794  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.493805  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:16.493815  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:16.493830  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:16.600529  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:16.600563  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:16.618829  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:16.618862  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:16.691485  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:16.691505  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:16.691519  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:16.733495  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:16.733529  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:16.810525  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:16.810554  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:16.853117  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:16.853193  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:16.916407  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:16.916436  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
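Editor's note: the "container status" line above uses a shell fallback, preferring crictl when present and falling back to `docker ps -a`. A small Go sketch of the same preference is shown here, assuming sudo and at least one runtime CLI are available; it is not the code minikube runs.

package main

import (
	"fmt"
	"os/exec"
)

// containerStatusCmd picks crictl when it is on PATH and falls back to docker,
// the same preference expressed by the shell one-liner in the log.
func containerStatusCmd() *exec.Cmd {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", path, "ps", "-a")
	}
	return exec.Command("sudo", "docker", "ps", "-a")
}

func main() {
	out, err := containerStatusCmd().CombinedOutput()
	fmt.Println(string(out), err)
}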
	I1206 09:07:19.476980  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:19.477495  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:19.477547  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:19.477604  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:19.523474  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:19.523499  222653 cri.go:89] found id: ""
	I1206 09:07:19.523510  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:19.523564  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.528345  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:19.528414  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:19.574594  222653 cri.go:89] found id: ""
	I1206 09:07:19.574624  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.574635  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:19.574643  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:19.574699  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:19.616375  222653 cri.go:89] found id: ""
	I1206 09:07:19.616403  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.616414  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:19.616423  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:19.616482  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:19.663286  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:19.663312  222653 cri.go:89] found id: ""
	I1206 09:07:19.663321  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:19.663385  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.668564  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:19.668634  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:19.714113  222653 cri.go:89] found id: ""
	I1206 09:07:19.714139  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.714150  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:19.714157  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:19.714211  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:19.756842  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:19.756874  222653 cri.go:89] found id: ""
	I1206 09:07:19.756885  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:19.756950  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.761470  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:19.761549  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:19.799903  222653 cri.go:89] found id: ""
	I1206 09:07:19.799925  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.799934  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:19.799946  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:19.800011  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:19.836747  222653 cri.go:89] found id: ""
	I1206 09:07:19.836791  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.836800  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:19.836809  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:19.836824  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:19.875524  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:19.875557  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:19.931634  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:19.931672  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:19.981752  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:19.981785  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:20.076548  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:20.076582  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:20.094654  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:20.094686  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:20.162170  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:20.162188  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:20.162200  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:20.202009  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:20.202079  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:19.069352  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:19.069765  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:19.069832  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:19.069880  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:19.095516  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:19.095539  224160 cri.go:89] found id: ""
	I1206 09:07:19.095547  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:19.095602  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.099652  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:19.099713  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:19.125980  224160 cri.go:89] found id: ""
	I1206 09:07:19.126028  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.126037  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:19.126044  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:19.126116  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:19.157560  224160 cri.go:89] found id: ""
	I1206 09:07:19.157585  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.157596  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:19.157603  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:19.157662  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:19.185043  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:19.185072  224160 cri.go:89] found id: ""
	I1206 09:07:19.185082  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:19.185140  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.189218  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:19.189278  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:19.216149  224160 cri.go:89] found id: ""
	I1206 09:07:19.216176  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.216188  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:19.216196  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:19.216256  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:19.248358  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:19.248382  224160 cri.go:89] found id: ""
	I1206 09:07:19.248391  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:19.248447  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.253303  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:19.253360  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:19.282404  224160 cri.go:89] found id: ""
	I1206 09:07:19.282435  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.282447  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:19.282455  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:19.282519  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:19.312767  224160 cri.go:89] found id: ""
	I1206 09:07:19.312788  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.312796  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:19.312805  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:19.312815  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:19.343035  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:19.343069  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:19.419701  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:19.419793  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:19.458441  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:19.458478  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:19.577004  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:19.577055  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:19.594808  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:19.594845  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:19.668665  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:19.668688  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:19.668703  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:19.704110  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:19.704139  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
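Editor's note: the api_server.go lines in these cycles probe https://<node-ip>:8443/healthz and record "connection refused" while the apiserver container is down. A minimal Go sketch of such a probe follows; TLS verification is skipped purely to keep the sketch short, which is an assumption and not how minikube authenticates.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver /healthz endpoint. A
// "connection refused" error, as in the log, means nothing is listening yet.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.103.2:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}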
	I1206 09:07:21.244467  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.227403916s)
	I1206 09:07:21.244501  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1206 09:07:21.244522  255989 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 09:07:21.244520  255989 ssh_runner.go:235] Completed: which crictl: (1.227291112s)
	I1206 09:07:21.244574  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 09:07:21.244577  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:22.325694  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.081089159s)
	I1206 09:07:22.325734  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1206 09:07:22.325756  255989 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:22.325811  255989 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.081136917s)
	I1206 09:07:22.325886  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:22.325819  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:22.355941  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:23.578476  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.252550319s)
	I1206 09:07:23.578515  255989 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.222546604s)
	I1206 09:07:23.578517  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1206 09:07:23.578541  255989 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:23.578546  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 09:07:23.578580  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:23.578625  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:07:24.931278  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.352673931s)
	I1206 09:07:24.931299  255989 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.352652271s)
	I1206 09:07:24.931312  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1206 09:07:24.931326  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1206 09:07:24.931340  255989 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:07:24.931345  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1206 09:07:24.931384  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
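Editor's note: the ssh_runner.go:235 "Completed: ... (1.2s)" lines above report wall-clock durations for slow remote commands such as `sudo podman load -i <tarball>`. The sketch below shows the general measure-and-report pattern in Go; the one-second threshold and the example command are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runTimed executes a command and reports its wall-clock duration, similar to
// the "Completed: ...: (1.227403916s)" lines emitted for slow commands.
func runTimed(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if elapsed := time.Since(start); elapsed > time.Second {
		// Only slow commands are worth calling out, as in the log.
		fmt.Printf("Completed: %s %v: (%s)\n", name, args, elapsed)
	}
	return err
}

func main() {
	// Illustrative command; the log runs `sudo podman load -i <tarball>` here.
	if err := runTimed("sleep", "2"); err != nil {
		fmt.Println(err)
	}
}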
	I1206 09:07:21.407485  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:21.907905  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:22.407632  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:22.908214  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:23.408097  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:23.907323  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:24.408230  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:24.908260  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:25.409130  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:25.907386  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:22.782066  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:22.782550  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:22.782621  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:22.782766  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:22.819387  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:22.819410  222653 cri.go:89] found id: ""
	I1206 09:07:22.819421  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:22.819477  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.824130  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:22.824204  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:22.867457  222653 cri.go:89] found id: ""
	I1206 09:07:22.867486  222653 logs.go:282] 0 containers: []
	W1206 09:07:22.867495  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:22.867503  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:22.867563  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:22.914264  222653 cri.go:89] found id: ""
	I1206 09:07:22.914290  222653 logs.go:282] 0 containers: []
	W1206 09:07:22.914301  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:22.914322  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:22.914380  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:22.954438  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:22.954465  222653 cri.go:89] found id: ""
	I1206 09:07:22.954475  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:22.954536  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.958805  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:22.958869  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:23.002279  222653 cri.go:89] found id: ""
	I1206 09:07:23.002308  222653 logs.go:282] 0 containers: []
	W1206 09:07:23.002318  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:23.002326  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:23.002388  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:23.039308  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:23.039342  222653 cri.go:89] found id: ""
	I1206 09:07:23.039353  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:23.039407  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:23.043416  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:23.043479  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:23.083536  222653 cri.go:89] found id: ""
	I1206 09:07:23.083558  222653 logs.go:282] 0 containers: []
	W1206 09:07:23.083565  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:23.083571  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:23.083627  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:23.119518  222653 cri.go:89] found id: ""
	I1206 09:07:23.119543  222653 logs.go:282] 0 containers: []
	W1206 09:07:23.119553  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:23.119563  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:23.119578  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:23.193995  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:23.194025  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:23.230380  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:23.230405  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:23.281194  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:23.281232  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:23.325158  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:23.325186  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:23.431223  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:23.431254  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:23.448934  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:23.448962  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:23.521617  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:23.521641  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:23.521656  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:26.062046  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:26.062490  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:26.062546  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:26.062599  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:26.104652  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:26.104672  222653 cri.go:89] found id: ""
	I1206 09:07:26.104681  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:26.104737  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:26.108658  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:26.108727  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:26.148882  222653 cri.go:89] found id: ""
	I1206 09:07:26.148910  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.148920  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:26.148927  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:26.148984  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:26.187305  222653 cri.go:89] found id: ""
	I1206 09:07:26.187330  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.187338  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:26.187345  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:26.187389  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:26.229204  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:26.229229  222653 cri.go:89] found id: ""
	I1206 09:07:26.229240  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:26.229303  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:26.233743  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:26.233821  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:26.270792  222653 cri.go:89] found id: ""
	I1206 09:07:26.270821  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.270836  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:26.270844  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:26.270904  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:26.309623  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:26.309645  222653 cri.go:89] found id: ""
	I1206 09:07:26.309655  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:26.309710  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:26.313667  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:26.313734  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:26.351148  222653 cri.go:89] found id: ""
	I1206 09:07:26.351175  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.351185  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:26.351193  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:26.351247  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:26.389692  222653 cri.go:89] found id: ""
	I1206 09:07:26.389729  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.389741  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:26.389754  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:26.389771  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:26.439423  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:26.439463  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:22.238199  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:22.238765  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:22.238818  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:22.238869  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:22.272767  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:22.272790  224160 cri.go:89] found id: ""
	I1206 09:07:22.272801  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:22.272857  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.277421  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:22.277480  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:22.304689  224160 cri.go:89] found id: ""
	I1206 09:07:22.304715  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.304724  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:22.304730  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:22.304790  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:22.332626  224160 cri.go:89] found id: ""
	I1206 09:07:22.332653  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.332664  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:22.332672  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:22.332725  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:22.363744  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:22.363767  224160 cri.go:89] found id: ""
	I1206 09:07:22.363777  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:22.363832  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.368679  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:22.368748  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:22.399667  224160 cri.go:89] found id: ""
	I1206 09:07:22.399695  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.399706  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:22.399713  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:22.399771  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:22.430379  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:22.430405  224160 cri.go:89] found id: ""
	I1206 09:07:22.430415  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:22.430478  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.434663  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:22.434725  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:22.462530  224160 cri.go:89] found id: ""
	I1206 09:07:22.462559  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.462571  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:22.462578  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:22.462642  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:22.493665  224160 cri.go:89] found id: ""
	I1206 09:07:22.493692  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.493702  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:22.493713  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:22.493725  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:22.588888  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:22.588919  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:22.603368  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:22.603396  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:22.660150  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:22.660172  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:22.660187  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:22.691897  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:22.691936  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:22.719279  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:22.719302  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:22.748448  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:22.748476  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:22.807592  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:22.807627  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:25.344068  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:25.344559  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:25.344607  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:25.344653  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:25.376468  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:25.376494  224160 cri.go:89] found id: ""
	I1206 09:07:25.376505  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:25.376557  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:25.380651  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:25.380704  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:25.411711  224160 cri.go:89] found id: ""
	I1206 09:07:25.411736  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.411747  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:25.411755  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:25.411808  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:25.458026  224160 cri.go:89] found id: ""
	I1206 09:07:25.458057  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.458068  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:25.458077  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:25.458134  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:25.506732  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:25.506754  224160 cri.go:89] found id: ""
	I1206 09:07:25.506763  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:25.506816  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:25.513710  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:25.513837  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:25.554365  224160 cri.go:89] found id: ""
	I1206 09:07:25.554390  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.554408  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:25.554415  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:25.554470  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:25.596695  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:25.596742  224160 cri.go:89] found id: ""
	I1206 09:07:25.596752  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:25.596826  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:25.602118  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:25.602186  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:25.636831  224160 cri.go:89] found id: ""
	I1206 09:07:25.636898  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.636914  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:25.636922  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:25.637073  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:25.675266  224160 cri.go:89] found id: ""
	I1206 09:07:25.675290  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.675300  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:25.675309  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:25.675322  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:25.712437  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:25.712463  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:25.802809  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:25.802846  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:25.817950  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:25.817975  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:25.885512  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:25.885537  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:25.885553  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:25.922034  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:25.922066  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:25.954182  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:25.954212  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:25.985910  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:25.985947  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:26.361903  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.430497145s)
	I1206 09:07:26.361927  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1206 09:07:26.361948  255989 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:07:26.362068  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:07:26.958660  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 09:07:26.958698  255989 cache_images.go:125] Successfully loaded all cached images
	I1206 09:07:26.958705  255989 cache_images.go:94] duration metric: took 9.11307095s to LoadCachedImages
	I1206 09:07:26.958720  255989 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 09:07:26.958809  255989 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-769733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:07:26.958898  255989 ssh_runner.go:195] Run: crio config
	I1206 09:07:27.004545  255989 cni.go:84] Creating CNI manager for ""
	I1206 09:07:27.004566  255989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:07:27.004583  255989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:07:27.004602  255989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-769733 NodeName:no-preload-769733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:07:27.004761  255989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-769733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
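Editor's note: the multi-document YAML above is the kubeadm config minikube later writes to /var/tmp/minikube/kubeadm.yaml.new. A short Go sketch of walking such a multi-document stream follows, assuming the external library gopkg.in/yaml.v3 and an abbreviated config; it is only a way to sanity-check document kinds, not minikube's own parsing.

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3" // external dependency, used only for this sketch
)

// kubeadmConfig abbreviates two of the documents shown in the log above.
const kubeadmConfig = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.35.0-beta.0
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(kubeadmConfig))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		// Print each document's kind so a missing or misnamed section stands out.
		fmt.Println("found document kind:", doc["kind"])
	}
}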
	
	I1206 09:07:27.004826  255989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:07:27.012998  255989 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1206 09:07:27.013055  255989 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:07:27.020869  255989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1206 09:07:27.020923  255989 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1206 09:07:27.020965  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1206 09:07:27.020957  255989 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1206 09:07:27.025191  255989 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1206 09:07:27.025222  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1206 09:07:27.737595  255989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:07:27.751161  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1206 09:07:27.755029  255989 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1206 09:07:27.755060  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1206 09:07:27.847373  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1206 09:07:27.860184  255989 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1206 09:07:27.860245  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
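Editor's note: because no preload tarball covers v1.35.0-beta.0, the log above downloads kubectl/kubelet/kubeadm from dl.k8s.io with a "checksum=file:<url>.sha256" verifier, then scps each binary when the stat existence check fails. The Go sketch below illustrates a download-plus-SHA-256 check of that shape; the destination path and trimmed error handling are assumptions.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadWithSHA256 fetches url to dest and compares its SHA-256 against the
// published <url>.sha256 file, roughly what the checksum=file:... lines describe.
func downloadWithSHA256(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if strings.TrimSpace(string(want)) != got {
		return fmt.Errorf("checksum mismatch for %s", dest)
	}
	return nil
}

func main() {
	err := downloadWithSHA256(
		"https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl",
		"/tmp/kubectl")
	fmt.Println(err)
}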
	I1206 09:07:28.086332  255989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:07:28.095015  255989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 09:07:28.107611  255989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:07:28.181625  255989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1206 09:07:28.195170  255989 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:07:28.199028  255989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
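Editor's note: the bash one-liner above rewrites /etc/hosts idempotently, dropping any stale control-plane.minikube.internal line before appending the current node IP. A minimal in-memory Go equivalent is sketched below; the function name and sample input are illustrative only.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry returns hosts content with any existing line for host
// removed and a fresh "ip<TAB>host" line appended -- the same effect as the
// grep/echo pipeline in the log, done in memory for this sketch.
func upsertHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.94.3\tcontrol-plane.minikube.internal"
	fmt.Print(upsertHostsEntry(before, "192.168.94.2", "control-plane.minikube.internal"))
}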
	I1206 09:07:28.218317  255989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:07:28.301493  255989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:07:28.323234  255989 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733 for IP: 192.168.94.2
	I1206 09:07:28.323256  255989 certs.go:195] generating shared ca certs ...
	I1206 09:07:28.323278  255989 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.323446  255989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:07:28.323487  255989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:07:28.323497  255989 certs.go:257] generating profile certs ...
	I1206 09:07:28.323548  255989 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.key
	I1206 09:07:28.323561  255989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt with IP's: []
	I1206 09:07:28.439838  255989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt ...
	I1206 09:07:28.439864  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: {Name:mk51ce1a337b109238ea95988a6d82b04abffa87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.440048  255989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.key ...
	I1206 09:07:28.440063  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.key: {Name:mk549eb3bee0556ac6670ffc50072f5f60e88eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.440148  255989 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key.54d70cf7
	I1206 09:07:28.440164  255989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt.54d70cf7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:07:28.513593  255989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt.54d70cf7 ...
	I1206 09:07:28.513628  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt.54d70cf7: {Name:mkbd6a20e4f216916338facbe5f5c86a546ef2d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.513836  255989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key.54d70cf7 ...
	I1206 09:07:28.513858  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key.54d70cf7: {Name:mk38235e3e898831eee31ebf5b7782ea0c001e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.513962  255989 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt.54d70cf7 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt
	I1206 09:07:28.514099  255989 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key.54d70cf7 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key
	I1206 09:07:28.514180  255989 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.key
	I1206 09:07:28.514203  255989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.crt with IP's: []
	I1206 09:07:28.576097  255989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.crt ...
	I1206 09:07:28.576120  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.crt: {Name:mka0e374df5d33e71d4cc208952fa17a2348f688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.576288  255989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.key ...
	I1206 09:07:28.576304  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.key: {Name:mk810daf6c924b5eb6053d90018cda8997f74e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.576534  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:07:28.576581  255989 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:07:28.576596  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:07:28.576632  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:07:28.576675  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:07:28.576713  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:07:28.576775  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:07:28.577491  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:07:28.596884  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:07:28.616262  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:07:28.635527  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:07:28.653723  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:07:28.673896  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:07:28.693706  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:07:28.712568  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:07:28.731423  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:07:28.753516  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:07:28.772171  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:07:28.791901  255989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:07:28.806351  255989 ssh_runner.go:195] Run: openssl version
	I1206 09:07:28.812513  255989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:07:28.820357  255989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:07:28.827774  255989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:07:28.831658  255989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:07:28.831712  255989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:07:28.869520  255989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:07:28.878297  255989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:07:28.886554  255989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:28.894673  255989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:07:28.902154  255989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:28.905942  255989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:28.906001  255989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:28.949359  255989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:07:28.959758  255989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:07:28.970930  255989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:07:28.979436  255989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:07:28.987970  255989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:07:28.992224  255989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:07:28.992279  255989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:07:29.030890  255989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:07:29.040374  255989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:07:29.048917  255989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:07:29.053467  255989 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:07:29.053531  255989 kubeadm.go:401] StartCluster: {Name:no-preload-769733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:29.053620  255989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:07:29.053691  255989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:07:29.083934  255989 cri.go:89] found id: ""
	I1206 09:07:29.084033  255989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:07:29.092853  255989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:07:29.102213  255989 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:07:29.102279  255989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:07:29.110683  255989 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:07:29.110705  255989 kubeadm.go:158] found existing configuration files:
	
	I1206 09:07:29.110750  255989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:07:29.118482  255989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:07:29.118539  255989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:07:29.126578  255989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:07:29.135512  255989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:07:29.135577  255989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:07:29.144628  255989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:07:29.152275  255989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:07:29.152334  255989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:07:29.159452  255989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:07:29.166720  255989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:07:29.166764  255989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:07:29.173788  255989 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:07:29.210630  255989 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:07:29.210708  255989 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:07:29.278372  255989 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:07:29.278489  255989 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:07:29.278547  255989 kubeadm.go:319] OS: Linux
	I1206 09:07:29.278622  255989 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:07:29.278710  255989 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:07:29.278771  255989 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:07:29.278860  255989 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:07:29.278936  255989 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:07:29.279024  255989 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:07:29.279089  255989 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:07:29.279145  255989 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:07:29.337164  255989 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:07:29.337326  255989 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:07:29.337466  255989 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:07:29.356240  255989 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:07:26.408187  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:26.907752  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:27.407937  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:27.908341  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:28.407456  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:28.908145  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:29.408316  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:29.478685  249953 kubeadm.go:1114] duration metric: took 12.672959683s to wait for elevateKubeSystemPrivileges
	I1206 09:07:29.478722  249953 kubeadm.go:403] duration metric: took 23.249181397s to StartCluster
	I1206 09:07:29.478742  249953 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:29.478811  249953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:07:29.479779  249953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:29.480059  249953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:07:29.480060  249953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:07:29.480151  249953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:07:29.480265  249953 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-322324"
	I1206 09:07:29.480289  249953 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-322324"
	I1206 09:07:29.480301  249953 config.go:182] Loaded profile config "old-k8s-version-322324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:07:29.480320  249953 host.go:66] Checking if "old-k8s-version-322324" exists ...
	I1206 09:07:29.480322  249953 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-322324"
	I1206 09:07:29.480370  249953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-322324"
	I1206 09:07:29.480696  249953 cli_runner.go:164] Run: docker container inspect old-k8s-version-322324 --format={{.State.Status}}
	I1206 09:07:29.480827  249953 cli_runner.go:164] Run: docker container inspect old-k8s-version-322324 --format={{.State.Status}}
	I1206 09:07:29.481862  249953 out.go:179] * Verifying Kubernetes components...
	I1206 09:07:29.483374  249953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:07:29.508389  249953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:29.509016  249953 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-322324"
	I1206 09:07:29.509060  249953 host.go:66] Checking if "old-k8s-version-322324" exists ...
	I1206 09:07:29.509533  249953 cli_runner.go:164] Run: docker container inspect old-k8s-version-322324 --format={{.State.Status}}
	I1206 09:07:29.509548  249953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:07:29.509565  249953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:07:29.509613  249953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-322324
	I1206 09:07:29.539626  249953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:07:29.539726  249953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:07:29.539812  249953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-322324
	I1206 09:07:29.543514  249953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/old-k8s-version-322324/id_rsa Username:docker}
	I1206 09:07:29.566274  249953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/old-k8s-version-322324/id_rsa Username:docker}
	I1206 09:07:29.591608  249953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:07:29.632145  249953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:07:29.658455  249953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:07:29.680360  249953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:07:29.842612  249953 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1206 09:07:29.843684  249953 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-322324" to be "Ready" ...
	I1206 09:07:30.144494  249953 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:07:29.358225  255989 out.go:252]   - Generating certificates and keys ...
	I1206 09:07:29.358352  255989 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:07:29.358479  255989 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:07:29.426850  255989 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:07:29.610107  255989 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:07:29.669469  255989 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:07:29.723858  255989 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:07:29.770042  255989 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:07:29.772393  255989 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-769733] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:07:29.924724  255989 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:07:29.925069  255989 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-769733] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:07:30.010258  255989 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:07:30.044426  255989 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:07:30.110551  255989 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:07:30.110856  255989 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:07:30.242376  255989 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:07:30.504759  255989 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:07:30.656935  255989 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:07:30.787172  255989 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:07:30.865647  255989 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:07:30.866371  255989 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:07:30.872632  255989 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:07:30.145558  249953 addons.go:530] duration metric: took 665.402168ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:07:30.347940  249953 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-322324" context rescaled to 1 replicas
	I1206 09:07:26.513039  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:26.513069  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:26.548609  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:26.548633  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:26.595523  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:26.595555  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:26.634833  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:26.634868  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:26.726954  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:26.726996  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:26.744662  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:26.744692  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:26.811253  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:29.312056  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:28.547118  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:28.547489  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:28.547545  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:28.547601  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:28.574655  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:28.574675  224160 cri.go:89] found id: ""
	I1206 09:07:28.574682  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:28.574729  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:28.578748  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:28.578813  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:28.606204  224160 cri.go:89] found id: ""
	I1206 09:07:28.606229  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.606240  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:28.606248  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:28.606300  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:28.633905  224160 cri.go:89] found id: ""
	I1206 09:07:28.633935  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.633945  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:28.633959  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:28.634030  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:28.661910  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:28.661932  224160 cri.go:89] found id: ""
	I1206 09:07:28.661941  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:28.662028  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:28.666516  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:28.666575  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:28.693859  224160 cri.go:89] found id: ""
	I1206 09:07:28.693886  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.693899  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:28.693907  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:28.693966  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:28.721458  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:28.721481  224160 cri.go:89] found id: ""
	I1206 09:07:28.721497  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:28.721560  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:28.725272  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:28.725350  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:28.752770  224160 cri.go:89] found id: ""
	I1206 09:07:28.752799  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.752809  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:28.752816  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:28.752875  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:28.780329  224160 cri.go:89] found id: ""
	I1206 09:07:28.780355  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.780366  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:28.780377  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:28.780429  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:28.838478  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:28.838504  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:28.869185  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:28.869214  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:28.962944  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:28.962983  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:28.979527  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:28.979551  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:29.039801  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:29.039820  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:29.039831  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:29.073859  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:29.073887  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:29.104962  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:29.105013  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:31.634755  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:31.635190  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:31.635240  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:31.635288  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:31.664881  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:31.664907  224160 cri.go:89] found id: ""
	I1206 09:07:31.664917  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:31.664975  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:31.669962  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:31.670043  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:31.700237  224160 cri.go:89] found id: ""
	I1206 09:07:31.700260  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.700271  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:31.700278  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:31.700344  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:31.734931  224160 cri.go:89] found id: ""
	I1206 09:07:31.734958  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.734968  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:31.734976  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:31.735050  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:31.768414  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:31.768439  224160 cri.go:89] found id: ""
	I1206 09:07:31.768448  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:31.768507  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:31.774023  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:31.774102  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:31.808546  224160 cri.go:89] found id: ""
	I1206 09:07:31.808576  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.808589  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:31.808597  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:31.808661  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:31.840967  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:31.841316  224160 cri.go:89] found id: ""
	I1206 09:07:31.841342  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:31.841415  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:31.846757  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:31.846821  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:31.877072  224160 cri.go:89] found id: ""
	I1206 09:07:31.877099  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.877110  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:31.877118  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:31.877175  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:31.905962  224160 cri.go:89] found id: ""
	I1206 09:07:31.906014  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.906027  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:31.906038  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:31.906069  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:31.971232  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:31.971256  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:31.971273  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:32.004963  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:32.005026  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:32.033161  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:32.033188  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:32.060503  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:32.060529  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:32.113812  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:32.113850  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:32.144542  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:32.144571  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:30.874358  255989 out.go:252]   - Booting up control plane ...
	I1206 09:07:30.874487  255989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:07:30.874606  255989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:07:30.875686  255989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:07:30.893709  255989 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:07:30.893889  255989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:07:30.900923  255989 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:07:30.901253  255989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:07:30.901335  255989 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:07:31.012330  255989 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:07:31.012462  255989 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:07:31.514149  255989 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.87682ms
	I1206 09:07:31.517373  255989 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:07:31.517504  255989 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:07:31.517633  255989 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:07:31.517727  255989 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:07:32.022121  255989 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 504.568081ms
	I1206 09:07:33.390162  255989 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.872670055s
	I1206 09:07:35.520478  255989 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.003048064s
	I1206 09:07:35.537470  255989 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:07:35.548264  255989 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:07:35.556402  255989 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:07:35.556719  255989 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-769733 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:07:35.564904  255989 kubeadm.go:319] [bootstrap-token] Using token: 595w8g.4ay26dwior6u2ehq
	I1206 09:07:35.566977  255989 out.go:252]   - Configuring RBAC rules ...
	I1206 09:07:35.567130  255989 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:07:35.570103  255989 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:07:35.575169  255989 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:07:35.577548  255989 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:07:35.579866  255989 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:07:35.582193  255989 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	W1206 09:07:31.849195  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
	W1206 09:07:34.347852  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
	I1206 09:07:34.314394  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 09:07:34.314486  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:34.314552  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:34.354980  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:34.355028  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:34.355034  222653 cri.go:89] found id: ""
	I1206 09:07:34.355043  222653 logs.go:282] 2 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:34.355093  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.359478  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.363487  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:34.363554  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:34.399236  222653 cri.go:89] found id: ""
	I1206 09:07:34.399263  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.399272  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:34.399278  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:34.399323  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:34.435457  222653 cri.go:89] found id: ""
	I1206 09:07:34.435478  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.435484  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:34.435489  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:34.435543  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:34.473941  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:34.473967  222653 cri.go:89] found id: ""
	I1206 09:07:34.473978  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:34.474044  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.478215  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:34.478286  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:34.514284  222653 cri.go:89] found id: ""
	I1206 09:07:34.514307  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.514314  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:34.514319  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:34.514384  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:34.551124  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:34.551148  222653 cri.go:89] found id: ""
	I1206 09:07:34.551157  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:34.551212  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.555723  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:34.555796  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:34.592494  222653 cri.go:89] found id: ""
	I1206 09:07:34.592522  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.592532  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:34.592539  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:34.592585  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:34.633451  222653 cri.go:89] found id: ""
	I1206 09:07:34.633475  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.633486  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:34.633504  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:34.633518  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 09:07:35.927065  255989 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:07:36.340971  255989 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:07:36.926458  255989 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:07:36.927542  255989 kubeadm.go:319] 
	I1206 09:07:36.927624  255989 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:07:36.927635  255989 kubeadm.go:319] 
	I1206 09:07:36.927728  255989 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:07:36.927737  255989 kubeadm.go:319] 
	I1206 09:07:36.927780  255989 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:07:36.927843  255989 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:07:36.927889  255989 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:07:36.927895  255989 kubeadm.go:319] 
	I1206 09:07:36.927983  255989 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:07:36.928020  255989 kubeadm.go:319] 
	I1206 09:07:36.928103  255989 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:07:36.928112  255989 kubeadm.go:319] 
	I1206 09:07:36.928181  255989 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:07:36.928271  255989 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:07:36.928390  255989 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:07:36.928409  255989 kubeadm.go:319] 
	I1206 09:07:36.928532  255989 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:07:36.928643  255989 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:07:36.928652  255989 kubeadm.go:319] 
	I1206 09:07:36.928789  255989 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 595w8g.4ay26dwior6u2ehq \
	I1206 09:07:36.928953  255989 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:07:36.929010  255989 kubeadm.go:319] 	--control-plane 
	I1206 09:07:36.929019  255989 kubeadm.go:319] 
	I1206 09:07:36.929155  255989 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:07:36.929168  255989 kubeadm.go:319] 
	I1206 09:07:36.929290  255989 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 595w8g.4ay26dwior6u2ehq \
	I1206 09:07:36.929446  255989 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:07:36.931415  255989 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:07:36.931566  255989 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:07:36.931598  255989 cni.go:84] Creating CNI manager for ""
	I1206 09:07:36.931611  255989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:07:36.935641  255989 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:07:32.232881  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:32.232919  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:34.749601  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:34.750065  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:34.750121  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:34.750180  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:34.779404  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:34.779422  224160 cri.go:89] found id: ""
	I1206 09:07:34.779433  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:34.779478  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.783840  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:34.783899  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:34.811525  224160 cri.go:89] found id: ""
	I1206 09:07:34.811555  224160 logs.go:282] 0 containers: []
	W1206 09:07:34.811565  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:34.811574  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:34.811649  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:34.840881  224160 cri.go:89] found id: ""
	I1206 09:07:34.840919  224160 logs.go:282] 0 containers: []
	W1206 09:07:34.840931  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:34.840940  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:34.841035  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:34.868271  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:34.868290  224160 cri.go:89] found id: ""
	I1206 09:07:34.868300  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:34.868354  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.872625  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:34.872683  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:34.901139  224160 cri.go:89] found id: ""
	I1206 09:07:34.901166  224160 logs.go:282] 0 containers: []
	W1206 09:07:34.901175  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:34.901180  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:34.901226  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:34.927730  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:34.927748  224160 cri.go:89] found id: ""
	I1206 09:07:34.927755  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:34.927827  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.932708  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:34.932779  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:34.966267  224160 cri.go:89] found id: ""
	I1206 09:07:34.966296  224160 logs.go:282] 0 containers: []
	W1206 09:07:34.966306  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:34.966313  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:34.966372  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:35.002643  224160 cri.go:89] found id: ""
	I1206 09:07:35.002672  224160 logs.go:282] 0 containers: []
	W1206 09:07:35.002683  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:35.002694  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:35.002708  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:35.038650  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:35.038682  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:35.138295  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:35.138333  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:35.155748  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:35.155780  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:35.221461  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:35.221481  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:35.221496  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:35.258011  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:35.258044  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:35.290805  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:35.290850  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:35.322926  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:35.322950  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:36.936891  255989 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:07:36.941636  255989 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1206 09:07:36.941658  255989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:07:36.956343  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:07:37.182932  255989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:07:37.183022  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:37.183052  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-769733 minikube.k8s.io/updated_at=2025_12_06T09_07_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=no-preload-769733 minikube.k8s.io/primary=true
	I1206 09:07:37.195768  255989 ops.go:34] apiserver oom_adj: -16
	I1206 09:07:37.274765  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:37.774920  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:38.275113  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:38.775534  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:39.275186  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:39.775711  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:40.275114  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1206 09:07:36.846455  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
	W1206 09:07:38.846888  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
	I1206 09:07:40.775293  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:41.275747  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:41.346609  255989 kubeadm.go:1114] duration metric: took 4.16367274s to wait for elevateKubeSystemPrivileges
	I1206 09:07:41.346645  255989 kubeadm.go:403] duration metric: took 12.29311805s to StartCluster
	I1206 09:07:41.346667  255989 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:41.346753  255989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:07:41.348124  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:41.348337  255989 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:07:41.348365  255989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:07:41.348426  255989 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:07:41.348506  255989 config.go:182] Loaded profile config "no-preload-769733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:41.348525  255989 addons.go:70] Setting storage-provisioner=true in profile "no-preload-769733"
	I1206 09:07:41.348548  255989 addons.go:239] Setting addon storage-provisioner=true in "no-preload-769733"
	I1206 09:07:41.348560  255989 addons.go:70] Setting default-storageclass=true in profile "no-preload-769733"
	I1206 09:07:41.348582  255989 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-769733"
	I1206 09:07:41.348585  255989 host.go:66] Checking if "no-preload-769733" exists ...
	I1206 09:07:41.348920  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:41.349080  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:41.351151  255989 out.go:179] * Verifying Kubernetes components...
	I1206 09:07:41.352247  255989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:07:41.371018  255989 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:41.371704  255989 addons.go:239] Setting addon default-storageclass=true in "no-preload-769733"
	I1206 09:07:41.371739  255989 host.go:66] Checking if "no-preload-769733" exists ...
	I1206 09:07:41.372206  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:41.372293  255989 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:07:41.372315  255989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:07:41.372368  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:41.399834  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:41.401887  255989 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:07:41.401909  255989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:07:41.401960  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:41.429582  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:41.446281  255989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:07:41.511869  255989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:07:41.535642  255989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:07:41.545648  255989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:07:41.631627  255989 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1206 09:07:41.632631  255989 node_ready.go:35] waiting up to 6m0s for node "no-preload-769733" to be "Ready" ...
	I1206 09:07:41.835464  255989 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:07:37.885172  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1206 09:07:41.346980  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
	I1206 09:07:41.846494  249953 node_ready.go:49] node "old-k8s-version-322324" is "Ready"
	I1206 09:07:41.846523  249953 node_ready.go:38] duration metric: took 12.002814275s for node "old-k8s-version-322324" to be "Ready" ...
	I1206 09:07:41.846539  249953 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:07:41.846591  249953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:07:41.858774  249953 api_server.go:72] duration metric: took 12.378677713s to wait for apiserver process to appear ...
	I1206 09:07:41.858802  249953 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:07:41.858830  249953 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:07:41.863370  249953 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1206 09:07:41.864536  249953 api_server.go:141] control plane version: v1.28.0
	I1206 09:07:41.864565  249953 api_server.go:131] duration metric: took 5.75587ms to wait for apiserver health ...
	I1206 09:07:41.864576  249953 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:07:41.868797  249953 system_pods.go:59] 8 kube-system pods found
	I1206 09:07:41.868827  249953 system_pods.go:61] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:41.868832  249953 system_pods.go:61] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:41.868837  249953 system_pods.go:61] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:41.868841  249953 system_pods.go:61] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:41.868845  249953 system_pods.go:61] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:41.868848  249953 system_pods.go:61] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:41.868851  249953 system_pods.go:61] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:41.868856  249953 system_pods.go:61] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:41.868865  249953 system_pods.go:74] duration metric: took 4.282928ms to wait for pod list to return data ...
	I1206 09:07:41.868874  249953 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:07:41.870888  249953 default_sa.go:45] found service account: "default"
	I1206 09:07:41.870908  249953 default_sa.go:55] duration metric: took 2.026608ms for default service account to be created ...
	I1206 09:07:41.870915  249953 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:07:41.874429  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:41.874460  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:41.874468  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:41.874485  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:41.874494  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:41.874505  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:41.874514  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:41.874519  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:41.874529  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:41.874561  249953 retry.go:31] will retry after 192.117588ms: missing components: kube-dns
	I1206 09:07:42.073303  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:42.073353  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:42.073360  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:42.073368  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:42.073373  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:42.073378  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:42.073383  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:42.073395  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:42.073402  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:42.073418  249953 retry.go:31] will retry after 306.512117ms: missing components: kube-dns
	I1206 09:07:42.389397  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:42.389435  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:42.389451  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:42.389473  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:42.389484  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:42.389493  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:42.389502  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:42.389513  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:42.389536  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:42.389562  249953 retry.go:31] will retry after 418.251259ms: missing components: kube-dns
	I1206 09:07:42.812921  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:42.812954  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:42.812960  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:42.812965  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:42.812969  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:42.812974  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:42.812977  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:42.812980  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:42.812998  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:42.813017  249953 retry.go:31] will retry after 373.953455ms: missing components: kube-dns
	I1206 09:07:43.191920  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:43.191957  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Running
	I1206 09:07:43.191965  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:43.191971  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:43.191976  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:43.191984  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:43.192021  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:43.192030  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:43.192036  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Running
	I1206 09:07:43.192045  249953 system_pods.go:126] duration metric: took 1.321124826s to wait for k8s-apps to be running ...
	I1206 09:07:43.192057  249953 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:07:43.192114  249953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:07:43.207751  249953 system_svc.go:56] duration metric: took 15.683735ms WaitForService to wait for kubelet
	I1206 09:07:43.207780  249953 kubeadm.go:587] duration metric: took 13.727689751s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:07:43.207800  249953 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:07:43.210927  249953 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:07:43.210959  249953 node_conditions.go:123] node cpu capacity is 8
	I1206 09:07:43.210979  249953 node_conditions.go:105] duration metric: took 3.172435ms to run NodePressure ...
	I1206 09:07:43.211017  249953 start.go:242] waiting for startup goroutines ...
	I1206 09:07:43.211032  249953 start.go:247] waiting for cluster config update ...
	I1206 09:07:43.211046  249953 start.go:256] writing updated cluster config ...
	I1206 09:07:43.211352  249953 ssh_runner.go:195] Run: rm -f paused
	I1206 09:07:43.215613  249953 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:07:43.220387  249953 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gf4kq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.225494  249953 pod_ready.go:94] pod "coredns-5dd5756b68-gf4kq" is "Ready"
	I1206 09:07:43.225517  249953 pod_ready.go:86] duration metric: took 5.101903ms for pod "coredns-5dd5756b68-gf4kq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.228616  249953 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.233130  249953 pod_ready.go:94] pod "etcd-old-k8s-version-322324" is "Ready"
	I1206 09:07:43.233156  249953 pod_ready.go:86] duration metric: took 4.515037ms for pod "etcd-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.236328  249953 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.240615  249953 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-322324" is "Ready"
	I1206 09:07:43.240638  249953 pod_ready.go:86] duration metric: took 4.285769ms for pod "kube-apiserver-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.243145  249953 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.619890  249953 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-322324" is "Ready"
	I1206 09:07:43.619916  249953 pod_ready.go:86] duration metric: took 376.751902ms for pod "kube-controller-manager-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.820969  249953 pod_ready.go:83] waiting for pod "kube-proxy-pspsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:44.220198  249953 pod_ready.go:94] pod "kube-proxy-pspsz" is "Ready"
	I1206 09:07:44.220227  249953 pod_ready.go:86] duration metric: took 399.219428ms for pod "kube-proxy-pspsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:44.420863  249953 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:44.820319  249953 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-322324" is "Ready"
	I1206 09:07:44.820354  249953 pod_ready.go:86] duration metric: took 399.451148ms for pod "kube-scheduler-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:44.820370  249953 pod_ready.go:40] duration metric: took 1.604725918s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:07:44.866194  249953 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1206 09:07:44.867969  249953 out.go:203] 
	W1206 09:07:44.869218  249953 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1206 09:07:44.870240  249953 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1206 09:07:44.871529  249953 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-322324" cluster and "default" namespace by default
	I1206 09:07:41.836687  255989 addons.go:530] duration metric: took 488.265725ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:07:42.135862  255989 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-769733" context rescaled to 1 replicas
	W1206 09:07:43.635662  255989 node_ready.go:57] node "no-preload-769733" has "Ready":"False" status (will retry)
	I1206 09:07:44.704568  222653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.071024996s)
	W1206 09:07:44.704617  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1206 09:07:44.704633  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:44.704647  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:44.743422  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:44.743457  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:44.813286  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:44.813317  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:44.911624  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:44.911658  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:44.929150  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:07:44.929186  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:44.971310  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:44.971337  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:45.007885  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:45.007910  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:45.063548  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:45.063587  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:42.886157  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 09:07:42.886232  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:42.886296  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:42.916968  224160 cri.go:89] found id: "4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed"
	I1206 09:07:42.917021  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:42.917028  224160 cri.go:89] found id: ""
	I1206 09:07:42.917036  224160 logs.go:282] 2 containers: [4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:42.917183  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:42.921948  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:42.926436  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:42.926500  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:42.960276  224160 cri.go:89] found id: ""
	I1206 09:07:42.960306  224160 logs.go:282] 0 containers: []
	W1206 09:07:42.960317  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:42.960329  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:42.960391  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:43.003349  224160 cri.go:89] found id: ""
	I1206 09:07:43.003378  224160 logs.go:282] 0 containers: []
	W1206 09:07:43.003388  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:43.003395  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:43.003467  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:43.036071  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:43.036095  224160 cri.go:89] found id: ""
	I1206 09:07:43.036106  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:43.036169  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:43.040573  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:43.040643  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:43.072172  224160 cri.go:89] found id: ""
	I1206 09:07:43.072200  224160 logs.go:282] 0 containers: []
	W1206 09:07:43.072210  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:43.072217  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:43.072275  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:43.105694  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:43.105716  224160 cri.go:89] found id: ""
	I1206 09:07:43.105727  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:43.105786  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:43.110341  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:43.110394  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:43.139980  224160 cri.go:89] found id: ""
	I1206 09:07:43.140020  224160 logs.go:282] 0 containers: []
	W1206 09:07:43.140031  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:43.140038  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:43.140098  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:43.168849  224160 cri.go:89] found id: ""
	I1206 09:07:43.168876  224160 logs.go:282] 0 containers: []
	W1206 09:07:43.168887  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:43.168905  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:43.168920  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:43.266073  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:43.266105  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:46.135558  255989 node_ready.go:57] node "no-preload-769733" has "Ready":"False" status (will retry)
	W1206 09:07:48.135635  255989 node_ready.go:57] node "no-preload-769733" has "Ready":"False" status (will retry)
	W1206 09:07:50.136042  255989 node_ready.go:57] node "no-preload-769733" has "Ready":"False" status (will retry)
	I1206 09:07:47.604853  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:47.605374  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:47.605427  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:47.605488  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:47.640247  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:47.640270  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:47.640276  222653 cri.go:89] found id: ""
	I1206 09:07:47.640285  222653 logs.go:282] 2 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:47.640343  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.644294  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.647782  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:47.647853  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:47.682228  222653 cri.go:89] found id: ""
	I1206 09:07:47.682249  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.682255  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:47.682263  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:47.682306  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:47.716449  222653 cri.go:89] found id: ""
	I1206 09:07:47.716473  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.716482  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:47.716489  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:47.716548  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:47.751665  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:47.751688  222653 cri.go:89] found id: ""
	I1206 09:07:47.751696  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:47.751743  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.755458  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:47.755509  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:47.790335  222653 cri.go:89] found id: ""
	I1206 09:07:47.790359  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.790367  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:47.790373  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:47.790422  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:47.824861  222653 cri.go:89] found id: "dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:47.824883  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:47.824886  222653 cri.go:89] found id: ""
	I1206 09:07:47.824893  222653 logs.go:282] 2 containers: [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:47.824936  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.828796  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.832275  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:47.832322  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:47.867447  222653 cri.go:89] found id: ""
	I1206 09:07:47.867468  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.867475  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:47.867481  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:47.867557  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:47.904140  222653 cri.go:89] found id: ""
	I1206 09:07:47.904167  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.904177  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:47.904197  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:47.904211  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:47.920401  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:47.920426  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:47.988341  222653 logs.go:123] Gathering logs for kube-controller-manager [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9] ...
	I1206 09:07:47.988373  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:48.023893  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:48.023916  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:48.074221  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:48.074252  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:48.169186  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:48.169215  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:48.229200  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:48.229219  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:07:48.229232  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:48.268047  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:48.268078  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:48.306185  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:48.306215  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:48.340880  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:48.340905  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:50.881094  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:50.881493  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:50.881540  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:50.881588  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:50.916366  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:50.916386  222653 cri.go:89] found id: ""
	I1206 09:07:50.916393  222653 logs.go:282] 1 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400]
	I1206 09:07:50.916452  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:50.920255  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:50.920318  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:50.954222  222653 cri.go:89] found id: ""
	I1206 09:07:50.954242  222653 logs.go:282] 0 containers: []
	W1206 09:07:50.954255  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:50.954261  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:50.954313  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:50.989922  222653 cri.go:89] found id: ""
	I1206 09:07:50.989950  222653 logs.go:282] 0 containers: []
	W1206 09:07:50.989957  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:50.989979  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:50.990052  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:51.024154  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:51.024174  222653 cri.go:89] found id: ""
	I1206 09:07:51.024183  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:51.024239  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:51.027928  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:51.027983  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:51.064518  222653 cri.go:89] found id: ""
	I1206 09:07:51.064551  222653 logs.go:282] 0 containers: []
	W1206 09:07:51.064563  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:51.064572  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:51.064630  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:51.099738  222653 cri.go:89] found id: "dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:51.099761  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:51.099767  222653 cri.go:89] found id: ""
	I1206 09:07:51.099776  222653 logs.go:282] 2 containers: [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:51.099828  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:51.103758  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:51.107314  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:51.107379  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:51.142058  222653 cri.go:89] found id: ""
	I1206 09:07:51.142082  222653 logs.go:282] 0 containers: []
	W1206 09:07:51.142092  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:51.142100  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:51.142159  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:51.176980  222653 cri.go:89] found id: ""
	I1206 09:07:51.177051  222653 logs.go:282] 0 containers: []
	W1206 09:07:51.177059  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:51.177073  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:51.177088  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:51.235708  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:51.235726  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:51.235742  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:51.305544  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:51.305573  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:51.340354  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:51.340390  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:51.377578  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:07:51.377603  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:51.414929  222653 logs.go:123] Gathering logs for kube-controller-manager [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9] ...
	I1206 09:07:51.414953  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:51.449327  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:51.449352  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Dec 06 09:07:42 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:42.056416883Z" level=info msg="Starting container: ab3710e0a623529e53948f831e2073a68fab2c556897cd80fd7a9046a9226417" id=d67d2995-0611-4162-a3ed-eec182336315 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:07:42 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:42.059737447Z" level=info msg="Started container" PID=2175 containerID=ab3710e0a623529e53948f831e2073a68fab2c556897cd80fd7a9046a9226417 description=kube-system/coredns-5dd5756b68-gf4kq/coredns id=d67d2995-0611-4162-a3ed-eec182336315 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3ca7865d0e08c90326f705fd40f03194671b2ea748be7acdc07f47621a1808f
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.336892477Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9f9fd352-2a7a-4ebd-b07b-24484f8812d3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.337020899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.342704675Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6cd5e900b3f4d85781352fa8ecb20646299b438c9759de0c69b739ad0db7f68d UID:b89bf94a-47a6-4b25-9cea-c82defe85ad0 NetNS:/var/run/netns/48e56e84-ceee-4afa-ac10-7d52ea954d1c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000596628}] Aliases:map[]}"
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.342729982Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.352335416Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6cd5e900b3f4d85781352fa8ecb20646299b438c9759de0c69b739ad0db7f68d UID:b89bf94a-47a6-4b25-9cea-c82defe85ad0 NetNS:/var/run/netns/48e56e84-ceee-4afa-ac10-7d52ea954d1c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000596628}] Aliases:map[]}"
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.352483866Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.353281179Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.354127007Z" level=info msg="Ran pod sandbox 6cd5e900b3f4d85781352fa8ecb20646299b438c9759de0c69b739ad0db7f68d with infra container: default/busybox/POD" id=9f9fd352-2a7a-4ebd-b07b-24484f8812d3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.355277791Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3a3faec3-b25f-4d28-8c8e-dc914f6d6f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.355419189Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3a3faec3-b25f-4d28-8c8e-dc914f6d6f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.355453303Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3a3faec3-b25f-4d28-8c8e-dc914f6d6f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.356076429Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74208383-3553-4731-8954-3f664ef030ca name=/runtime.v1.ImageService/PullImage
	Dec 06 09:07:45 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:45.357340343Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 09:07:46 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:46.724893019Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=74208383-3553-4731-8954-3f664ef030ca name=/runtime.v1.ImageService/PullImage
	Dec 06 09:07:46 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:46.725770735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0160fe12-ab5d-4f20-82e4-9ef9e6d8bbdf name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:07:46 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:46.727155122Z" level=info msg="Creating container: default/busybox/busybox" id=ed720465-2ff5-48f5-bc2f-b3c4fec3501f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:07:46 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:46.727279163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:07:46 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:46.731501109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:07:46 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:46.731906919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:07:46 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:46.762364954Z" level=info msg="Created container 30351c85384b419c71d6c75633d5d845b789fda534bb4af2b262741e0ae08184: default/busybox/busybox" id=ed720465-2ff5-48f5-bc2f-b3c4fec3501f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:07:46 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:46.763281891Z" level=info msg="Starting container: 30351c85384b419c71d6c75633d5d845b789fda534bb4af2b262741e0ae08184" id=e75134bb-21a4-4fab-9588-966031b56fb4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:07:46 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:46.765221037Z" level=info msg="Started container" PID=2251 containerID=30351c85384b419c71d6c75633d5d845b789fda534bb4af2b262741e0ae08184 description=default/busybox/busybox id=e75134bb-21a4-4fab-9588-966031b56fb4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cd5e900b3f4d85781352fa8ecb20646299b438c9759de0c69b739ad0db7f68d
	Dec 06 09:07:53 old-k8s-version-322324 crio[777]: time="2025-12-06T09:07:53.109067868Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	30351c85384b4       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   6cd5e900b3f4d       busybox                                          default
	ab3710e0a6235       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   e3ca7865d0e08       coredns-5dd5756b68-gf4kq                         kube-system
	6e46d298307c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   b583201f41e57       storage-provisioner                              kube-system
	fc8612671f9fe       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   9d39fac6e77cf       kindnet-fn4nn                                    kube-system
	da79272ef7f97       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   76d293a8de123       kube-proxy-pspsz                                 kube-system
	36681eb1f5650       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   9159d38789291       kube-scheduler-old-k8s-version-322324            kube-system
	a2424518c591d       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   54292881080d6       kube-controller-manager-old-k8s-version-322324   kube-system
	7a053b2fb7a7f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   0ead43b0106b1       kube-apiserver-old-k8s-version-322324            kube-system
	0f95b2a3ecd5b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   52ca3360a5dbb       etcd-old-k8s-version-322324                      kube-system
	
	
	==> coredns [ab3710e0a623529e53948f831e2073a68fab2c556897cd80fd7a9046a9226417] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54153 - 63026 "HINFO IN 9179650101093119828.6139342943603977746. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023828921s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-322324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-322324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=old-k8s-version-322324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_07_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-322324
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:07:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:07:46 +0000   Sat, 06 Dec 2025 09:07:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:07:46 +0000   Sat, 06 Dec 2025 09:07:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:07:46 +0000   Sat, 06 Dec 2025 09:07:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:07:46 +0000   Sat, 06 Dec 2025 09:07:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-322324
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                a183c0fa-92d5-4537-8c49-640a14d95f5a
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-gf4kq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-322324                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-fn4nn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-322324             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-322324    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-pspsz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-322324             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-322324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-322324 event: Registered Node old-k8s-version-322324 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-322324 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [0f95b2a3ecd5bd1a1dbe6d98ac440c2a00fdc027e6ca45bfed9a26481ab0489a] <==
	{"level":"info","ts":"2025-12-06T09:07:10.815505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-06T09:07:10.815628Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-06T09:07:10.816849Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-06T09:07:10.817025Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-06T09:07:10.817099Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-06T09:07:10.81711Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-06T09:07:10.817216Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-06T09:07:11.80406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-06T09:07:11.804107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-06T09:07:11.804126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-06T09:07:11.80414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-06T09:07:11.804149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-06T09:07:11.804161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-06T09:07:11.804171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-06T09:07:11.804882Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:07:11.805441Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-322324 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-06T09:07:11.805454Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:07:11.805485Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:07:11.80565Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:07:11.805699Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-06T09:07:11.805726Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-06T09:07:11.805769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:07:11.805802Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:07:11.807291Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-06T09:07:11.807652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:07:54 up 50 min,  0 user,  load average: 1.66, 2.06, 1.59
	Linux old-k8s-version-322324 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fc8612671f9fe4b6c6fc13468a8aa2ee73631918a2f72980179cd672183fdaa7] <==
	I1206 09:07:31.244366       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:07:31.244647       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 09:07:31.244779       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:07:31.244795       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:07:31.244819       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:07:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:07:31.447549       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:07:31.447636       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:07:31.447647       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:07:31.448074       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:07:31.848821       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:07:31.941006       1 metrics.go:72] Registering metrics
	I1206 09:07:31.941115       1 controller.go:711] "Syncing nftables rules"
	I1206 09:07:41.456179       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:07:41.456296       1 main.go:301] handling current node
	I1206 09:07:51.451348       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:07:51.451390       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7a053b2fb7a7fe862c2aff3cc8380e269605948dd539ec6a80b615f4257a6e8c] <==
	I1206 09:07:13.075103       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1206 09:07:13.075278       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1206 09:07:13.075286       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:07:13.075380       1 aggregator.go:166] initial CRD sync complete...
	I1206 09:07:13.075425       1 autoregister_controller.go:141] Starting autoregister controller
	I1206 09:07:13.075451       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:07:13.075476       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:07:13.077661       1 controller.go:624] quota admission added evaluator for: namespaces
	E1206 09:07:13.082429       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1206 09:07:13.285409       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:07:13.979273       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:07:13.982590       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:07:13.982607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:07:14.368035       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:07:14.399450       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:07:14.484706       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:07:14.490018       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1206 09:07:14.490950       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 09:07:14.496108       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:07:15.035243       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1206 09:07:15.802184       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1206 09:07:15.811620       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:07:15.821594       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1206 09:07:29.241155       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1206 09:07:29.339278       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a2424518c591df4c00bdec6eb5327bb6e10d4d535b9a37cf043b2b0b280ce319] <==
	I1206 09:07:28.689391       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1206 09:07:28.689489       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1206 09:07:28.743900       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1206 09:07:29.085488       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:07:29.135385       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:07:29.135419       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1206 09:07:29.245119       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1206 09:07:29.352746       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fn4nn"
	I1206 09:07:29.354493       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pspsz"
	I1206 09:07:29.495781       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gf4kq"
	I1206 09:07:29.503922       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-dg5nl"
	I1206 09:07:29.511758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="266.929725ms"
	I1206 09:07:29.542399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="30.583509ms"
	I1206 09:07:29.542580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.417µs"
	I1206 09:07:29.869377       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1206 09:07:29.880846       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-dg5nl"
	I1206 09:07:29.890352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.247374ms"
	I1206 09:07:29.896764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.355693ms"
	I1206 09:07:29.896954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.561µs"
	I1206 09:07:41.677970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.561µs"
	I1206 09:07:41.696538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.848µs"
	I1206 09:07:42.951552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.305µs"
	I1206 09:07:43.000606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.880562ms"
	I1206 09:07:43.002133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="154.78µs"
	I1206 09:07:43.653339       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [da79272ef7f974d56269b85ea87a8aa7dc05e98896e5d0df657dbb07dbd45f00] <==
	I1206 09:07:29.775383       1 server_others.go:69] "Using iptables proxy"
	I1206 09:07:29.787031       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1206 09:07:29.814675       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:07:29.818043       1 server_others.go:152] "Using iptables Proxier"
	I1206 09:07:29.820074       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1206 09:07:29.820114       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1206 09:07:29.820156       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 09:07:29.820437       1 server.go:846] "Version info" version="v1.28.0"
	I1206 09:07:29.820450       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:07:29.821283       1 config.go:97] "Starting endpoint slice config controller"
	I1206 09:07:29.822454       1 config.go:315] "Starting node config controller"
	I1206 09:07:29.822480       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 09:07:29.823601       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 09:07:29.823772       1 config.go:188] "Starting service config controller"
	I1206 09:07:29.823822       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 09:07:29.923038       1 shared_informer.go:318] Caches are synced for node config
	I1206 09:07:29.924230       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 09:07:29.925548       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [36681eb1f5650d16e097f3dd1f2648e12526b4699903e79bfc7270c5a1e4afe4] <==
	W1206 09:07:13.047070       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1206 09:07:13.047110       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 09:07:13.047123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 09:07:13.047125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1206 09:07:13.047178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 09:07:13.047232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 09:07:13.047214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1206 09:07:13.047308       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 09:07:13.047358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 09:07:13.047315       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1206 09:07:13.915276       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 09:07:13.915312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 09:07:13.935627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 09:07:13.935654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1206 09:07:13.984270       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 09:07:13.984309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1206 09:07:14.122424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 09:07:14.122462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1206 09:07:14.142780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 09:07:14.142817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1206 09:07:14.154195       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 09:07:14.154226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1206 09:07:14.258795       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 09:07:14.258832       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1206 09:07:17.242248       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 06 09:07:28 old-k8s-version-322324 kubelet[1415]: I1206 09:07:28.560929    1415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.358633    1415 topology_manager.go:215] "Topology Admit Handler" podUID="b3999369-84b8-4a7f-b999-5305a89ad2ef" podNamespace="kube-system" podName="kindnet-fn4nn"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.360590    1415 topology_manager.go:215] "Topology Admit Handler" podUID="6e52eb74-1c28-4573-b5be-93a2b28646f5" podNamespace="kube-system" podName="kube-proxy-pspsz"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.444631    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvk6\" (UniqueName: \"kubernetes.io/projected/6e52eb74-1c28-4573-b5be-93a2b28646f5-kube-api-access-6jvk6\") pod \"kube-proxy-pspsz\" (UID: \"6e52eb74-1c28-4573-b5be-93a2b28646f5\") " pod="kube-system/kube-proxy-pspsz"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.444698    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b3999369-84b8-4a7f-b999-5305a89ad2ef-cni-cfg\") pod \"kindnet-fn4nn\" (UID: \"b3999369-84b8-4a7f-b999-5305a89ad2ef\") " pod="kube-system/kindnet-fn4nn"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.444763    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wrkf\" (UniqueName: \"kubernetes.io/projected/b3999369-84b8-4a7f-b999-5305a89ad2ef-kube-api-access-4wrkf\") pod \"kindnet-fn4nn\" (UID: \"b3999369-84b8-4a7f-b999-5305a89ad2ef\") " pod="kube-system/kindnet-fn4nn"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.444819    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e52eb74-1c28-4573-b5be-93a2b28646f5-xtables-lock\") pod \"kube-proxy-pspsz\" (UID: \"6e52eb74-1c28-4573-b5be-93a2b28646f5\") " pod="kube-system/kube-proxy-pspsz"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.444855    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3999369-84b8-4a7f-b999-5305a89ad2ef-lib-modules\") pod \"kindnet-fn4nn\" (UID: \"b3999369-84b8-4a7f-b999-5305a89ad2ef\") " pod="kube-system/kindnet-fn4nn"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.444893    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e52eb74-1c28-4573-b5be-93a2b28646f5-kube-proxy\") pod \"kube-proxy-pspsz\" (UID: \"6e52eb74-1c28-4573-b5be-93a2b28646f5\") " pod="kube-system/kube-proxy-pspsz"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.444925    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e52eb74-1c28-4573-b5be-93a2b28646f5-lib-modules\") pod \"kube-proxy-pspsz\" (UID: \"6e52eb74-1c28-4573-b5be-93a2b28646f5\") " pod="kube-system/kube-proxy-pspsz"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.444954    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3999369-84b8-4a7f-b999-5305a89ad2ef-xtables-lock\") pod \"kindnet-fn4nn\" (UID: \"b3999369-84b8-4a7f-b999-5305a89ad2ef\") " pod="kube-system/kindnet-fn4nn"
	Dec 06 09:07:29 old-k8s-version-322324 kubelet[1415]: I1206 09:07:29.925083    1415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pspsz" podStartSLOduration=0.925026257 podCreationTimestamp="2025-12-06 09:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:07:29.924901437 +0000 UTC m=+14.150382036" watchObservedRunningTime="2025-12-06 09:07:29.925026257 +0000 UTC m=+14.150506927"
	Dec 06 09:07:41 old-k8s-version-322324 kubelet[1415]: I1206 09:07:41.649199    1415 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 06 09:07:41 old-k8s-version-322324 kubelet[1415]: I1206 09:07:41.676072    1415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-fn4nn" podStartSLOduration=11.326524485 podCreationTimestamp="2025-12-06 09:07:29 +0000 UTC" firstStartedPulling="2025-12-06 09:07:29.674947705 +0000 UTC m=+13.900428296" lastFinishedPulling="2025-12-06 09:07:31.024403871 +0000 UTC m=+15.249884471" observedRunningTime="2025-12-06 09:07:31.926890957 +0000 UTC m=+16.152371558" watchObservedRunningTime="2025-12-06 09:07:41.67598066 +0000 UTC m=+25.901461263"
	Dec 06 09:07:41 old-k8s-version-322324 kubelet[1415]: I1206 09:07:41.676482    1415 topology_manager.go:215] "Topology Admit Handler" podUID="e6100832-c99a-456e-b2d0-359f940bfa8a" podNamespace="kube-system" podName="storage-provisioner"
	Dec 06 09:07:41 old-k8s-version-322324 kubelet[1415]: I1206 09:07:41.677329    1415 topology_manager.go:215] "Topology Admit Handler" podUID="349bf4f7-a7c8-45cb-a55f-cfad0698bfac" podNamespace="kube-system" podName="coredns-5dd5756b68-gf4kq"
	Dec 06 09:07:41 old-k8s-version-322324 kubelet[1415]: I1206 09:07:41.737320    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/349bf4f7-a7c8-45cb-a55f-cfad0698bfac-kube-api-access-8c7g2\") pod \"coredns-5dd5756b68-gf4kq\" (UID: \"349bf4f7-a7c8-45cb-a55f-cfad0698bfac\") " pod="kube-system/coredns-5dd5756b68-gf4kq"
	Dec 06 09:07:41 old-k8s-version-322324 kubelet[1415]: I1206 09:07:41.737417    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hv9\" (UniqueName: \"kubernetes.io/projected/e6100832-c99a-456e-b2d0-359f940bfa8a-kube-api-access-82hv9\") pod \"storage-provisioner\" (UID: \"e6100832-c99a-456e-b2d0-359f940bfa8a\") " pod="kube-system/storage-provisioner"
	Dec 06 09:07:41 old-k8s-version-322324 kubelet[1415]: I1206 09:07:41.737467    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/349bf4f7-a7c8-45cb-a55f-cfad0698bfac-config-volume\") pod \"coredns-5dd5756b68-gf4kq\" (UID: \"349bf4f7-a7c8-45cb-a55f-cfad0698bfac\") " pod="kube-system/coredns-5dd5756b68-gf4kq"
	Dec 06 09:07:41 old-k8s-version-322324 kubelet[1415]: I1206 09:07:41.737501    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e6100832-c99a-456e-b2d0-359f940bfa8a-tmp\") pod \"storage-provisioner\" (UID: \"e6100832-c99a-456e-b2d0-359f940bfa8a\") " pod="kube-system/storage-provisioner"
	Dec 06 09:07:42 old-k8s-version-322324 kubelet[1415]: I1206 09:07:42.962515    1415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gf4kq" podStartSLOduration=13.962459041 podCreationTimestamp="2025-12-06 09:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:07:42.951226868 +0000 UTC m=+27.176707489" watchObservedRunningTime="2025-12-06 09:07:42.962459041 +0000 UTC m=+27.187939643"
	Dec 06 09:07:42 old-k8s-version-322324 kubelet[1415]: I1206 09:07:42.981428    1415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.981383291 podCreationTimestamp="2025-12-06 09:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:07:42.963785849 +0000 UTC m=+27.189266450" watchObservedRunningTime="2025-12-06 09:07:42.981383291 +0000 UTC m=+27.206863956"
	Dec 06 09:07:45 old-k8s-version-322324 kubelet[1415]: I1206 09:07:45.034119    1415 topology_manager.go:215] "Topology Admit Handler" podUID="b89bf94a-47a6-4b25-9cea-c82defe85ad0" podNamespace="default" podName="busybox"
	Dec 06 09:07:45 old-k8s-version-322324 kubelet[1415]: I1206 09:07:45.057360    1415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvtsp\" (UniqueName: \"kubernetes.io/projected/b89bf94a-47a6-4b25-9cea-c82defe85ad0-kube-api-access-cvtsp\") pod \"busybox\" (UID: \"b89bf94a-47a6-4b25-9cea-c82defe85ad0\") " pod="default/busybox"
	Dec 06 09:07:46 old-k8s-version-322324 kubelet[1415]: I1206 09:07:46.961720    1415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.592090901 podCreationTimestamp="2025-12-06 09:07:45 +0000 UTC" firstStartedPulling="2025-12-06 09:07:45.355674833 +0000 UTC m=+29.581155425" lastFinishedPulling="2025-12-06 09:07:46.725249098 +0000 UTC m=+30.950729680" observedRunningTime="2025-12-06 09:07:46.961553811 +0000 UTC m=+31.187034413" watchObservedRunningTime="2025-12-06 09:07:46.961665156 +0000 UTC m=+31.187145755"
	
	
	==> storage-provisioner [6e46d298307c911c98b8467defe48ed2f8eb34a0eb012f634c1d74f7195520e2] <==
	I1206 09:07:42.068234       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:07:42.080232       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:07:42.080365       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 09:07:42.089619       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:07:42.089772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-322324_85340992-6423-42a4-a87b-0ac19ec43311!
	I1206 09:07:42.089762       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12abded4-5e7f-4c39-bde6-291e3d08af94", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-322324_85340992-6423-42a4-a87b-0ac19ec43311 became leader
	I1206 09:07:42.190109       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-322324_85340992-6423-42a4-a87b-0ac19ec43311!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-322324 -n old-k8s-version-322324
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-322324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.31s)
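The post-mortem above is assembled from commands that can be re-run by hand against the same profile when a failure like this needs to be inspected interactively. A minimal sketch, using the profile and context names from this run; <container-id> is a placeholder for any ID shown in the container status table:

    # cluster and pod state, as gathered by helpers_test.go
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-322324
    kubectl --context old-k8s-version-322324 get po -A --field-selector=status.phase!=Running

    # node-level state: container list, per-container logs, CRI-O journal
    out/minikube-linux-amd64 ssh -p old-k8s-version-322324 -- sudo crictl ps -a
    out/minikube-linux-amd64 ssh -p old-k8s-version-322324 -- sudo crictl logs --tail 400 <container-id>
    out/minikube-linux-amd64 ssh -p old-k8s-version-322324 -- sudo journalctl -u crio -n 400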

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-769733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-769733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (235.765927ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:08:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
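The exit status 11 above appears to come from minikube's paused-state check rather than from the addon itself: per the stderr, the enable path first runs "sudo runc list -f json" on the node, and on this crio-based node /run/runc does not exist, so the check errors out before the addon is touched. A minimal sketch of how one might confirm the runtime state directly on the node (same profile name as above; commands are illustrative only):

    # does the runc state directory exist, and what containers does the CRI report?
    out/minikube-linux-amd64 ssh -p no-preload-769733 -- sudo ls /run/runc
    out/minikube-linux-amd64 ssh -p no-preload-769733 -- sudo runc list -f json
    out/minikube-linux-amd64 ssh -p no-preload-769733 -- sudo crictl ps -a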
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-769733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-769733 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-769733 describe deploy/metrics-server -n kube-system: exit status 1 (56.959699ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-769733 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
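The image assertion that fails here can be reproduced with a plain kubectl query once the metrics-server deployment exists; a minimal sketch against the same context (the jsonpath expression is illustrative, not taken from the test):

    kubectl --context no-preload-769733 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'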
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-769733
helpers_test.go:243: (dbg) docker inspect no-preload-769733:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01",
	        "Created": "2025-12-06T09:07:11.630466318Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256486,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:07:11.663057492Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/hosts",
	        "LogPath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01-json.log",
	        "Name": "/no-preload-769733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-769733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-769733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01",
	                "LowerDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-769733",
	                "Source": "/var/lib/docker/volumes/no-preload-769733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-769733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-769733",
	                "name.minikube.sigs.k8s.io": "no-preload-769733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7e53e959cd867fbda1fe3fb4af104766df656795c7e62daa34d920ebacab3de4",
	            "SandboxKey": "/var/run/docker/netns/7e53e959cd86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-769733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d145c702ed0f08c782b680a462b6b5a0d8a60b36a26fd7d3512cd90419c2ab9",
	                    "EndpointID": "dbe2423514b910a196c6701fa5ba6754df22ffdd61a4ce6949d39b382be64ef5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "fa:9a:b7:3e:9f:a5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-769733",
	                        "2b0a9b7f20f1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
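The "Ports" block in the inspect output above shows how each guest port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 host port (e.g. 22/tcp on 33063). A short sketch of reading that mapping programmatically follows, assuming the docker CLI is available and using the container name from this run; the file name and struct are illustrative, not minikube's actual code:

// sshport.go — illustrative sketch: read the 22/tcp host binding from docker inspect.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	const name = "no-preload-769733" // container name taken from this run

	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		log.Fatalf("docker inspect failed: %v", err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		log.Fatalf("could not decode inspect output: %v", err)
	}
	// Ports maps "22/tcp" etc. to the 127.0.0.1:<ephemeral> bindings shown above.
	b := cs[0].NetworkSettings.Ports["22/tcp"]
	if len(b) == 0 {
		log.Fatal("no host binding for 22/tcp")
	}
	fmt.Printf("ssh reachable at %s:%s\n", b[0].HostIp, b[0].HostPort)
}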
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-769733 -n no-preload-769733
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-769733 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-769733 logs -n 25: (1.142550815s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-646473 sudo docker system info                                                                                                                                                                                                      │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo containerd config dump                                                                                                                                                                                                  │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo crio config                                                                                                                                                                                                             │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ delete  │ -p cilium-646473                                                                                                                                                                                                                              │ cilium-646473          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:06 UTC │
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-322324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ stop    │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079    │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p NoKubernetes-328079 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-328079    │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ -p NoKubernetes-328079 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-328079    │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ delete  │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079    │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733      │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-322324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-322324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-322324 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-322324 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-769733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-769733      │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:07:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:07:10.709356  255989 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:07:10.709447  255989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:10.709453  255989 out.go:374] Setting ErrFile to fd 2...
	I1206 09:07:10.709458  255989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:10.709680  255989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:07:10.710136  255989 out.go:368] Setting JSON to false
	I1206 09:07:10.711365  255989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2982,"bootTime":1765009049,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:07:10.711435  255989 start.go:143] virtualization: kvm guest
	I1206 09:07:10.714788  255989 out.go:179] * [no-preload-769733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:07:10.716499  255989 notify.go:221] Checking for updates...
	I1206 09:07:10.716689  255989 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:07:10.718341  255989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:07:10.719785  255989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:07:10.721101  255989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:07:10.722304  255989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:07:10.723594  255989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:07:10.725821  255989 config.go:182] Loaded profile config "kubernetes-upgrade-702638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:10.725979  255989 config.go:182] Loaded profile config "old-k8s-version-322324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:07:10.726135  255989 config.go:182] Loaded profile config "stopped-upgrade-454433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 09:07:10.726294  255989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:07:10.754235  255989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:07:10.754366  255989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:10.829452  255989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:07:10.818229436 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:07:10.829558  255989 docker.go:319] overlay module found
	I1206 09:07:10.831340  255989 out.go:179] * Using the docker driver based on user configuration
	I1206 09:07:10.832660  255989 start.go:309] selected driver: docker
	I1206 09:07:10.832678  255989 start.go:927] validating driver "docker" against <nil>
	I1206 09:07:10.832692  255989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:07:10.833461  255989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:10.898785  255989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:07:10.887610946 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:07:10.898982  255989 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:07:10.899263  255989 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:07:10.902086  255989 out.go:179] * Using Docker driver with root privileges
	I1206 09:07:10.903251  255989 cni.go:84] Creating CNI manager for ""
	I1206 09:07:10.903323  255989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:07:10.903337  255989 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:07:10.903436  255989 start.go:353] cluster config:
	{Name:no-preload-769733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:10.904697  255989 out.go:179] * Starting "no-preload-769733" primary control-plane node in "no-preload-769733" cluster
	I1206 09:07:10.905800  255989 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:07:10.906910  255989 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:07:10.908038  255989 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:07:10.908117  255989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:07:10.908184  255989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/config.json ...
	I1206 09:07:10.908219  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/config.json: {Name:mk1cb5931b5ab0f876560fa78618e8bbf5d2b987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:10.908399  255989 cache.go:107] acquiring lock: {Name:mk3ec8e7f3239e63a4579f339a0b167cd40d12bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908408  255989 cache.go:107] acquiring lock: {Name:mk80da841620836604a4fb28eae69f74c14650a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908426  255989 cache.go:107] acquiring lock: {Name:mk00ae7798d573847547213a6282bfb842af8cd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908521  255989 cache.go:115] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1206 09:07:10.908524  255989 cache.go:107] acquiring lock: {Name:mk73f8905845e61a1676a39e5cfb18e7706db084 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908549  255989 cache.go:107] acquiring lock: {Name:mkbac531e41cac0c4d7d33feda6ddd5a2ba806cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908570  255989 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:10.908602  255989 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:10.908633  255989 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:10.908659  255989 cache.go:107] acquiring lock: {Name:mk53305a921f4ea2ac8a27c83edbdce617400bb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908708  255989 cache.go:115] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 09:07:10.908717  255989 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 61.935µs
	I1206 09:07:10.908738  255989 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 09:07:10.908751  255989 cache.go:107] acquiring lock: {Name:mke25c95d56fddc4c4597d3d7e7c1bb342b9d6b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908532  255989 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 110.991µs
	I1206 09:07:10.908812  255989 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 09:07:10.908807  255989 cache.go:107] acquiring lock: {Name:mk68870c832ef1623cfb9db003338cadec0ed3ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.908824  255989 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:10.908897  255989 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:10.909007  255989 cache.go:115] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 09:07:10.909021  255989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 627.944µs
	I1206 09:07:10.909029  255989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 09:07:10.910128  255989 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:10.910136  255989 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:10.910139  255989 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:10.910210  255989 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:10.910850  255989 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:10.931415  255989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:07:10.931444  255989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:07:10.931458  255989 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:07:10.931484  255989 start.go:360] acquireMachinesLock for no-preload-769733: {Name:mke00f2a24f1a50a1bc4fbc79c0044e9888e3bc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:07:10.931588  255989 start.go:364] duration metric: took 87.679µs to acquireMachinesLock for "no-preload-769733"
	I1206 09:07:10.931620  255989 start.go:93] Provisioning new machine with config: &{Name:no-preload-769733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:07:10.931688  255989 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:07:05.958126  249953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:07:05.975441  249953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:07:05.992598  249953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:07:06.005053  249953 ssh_runner.go:195] Run: openssl version
	I1206 09:07:06.011129  249953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:06.018313  249953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:07:06.025307  249953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:06.028835  249953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:06.028885  249953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:06.063980  249953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:07:06.071649  249953 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:07:06.079331  249953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:07:06.086388  249953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:07:06.093714  249953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:07:06.098066  249953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:07:06.098140  249953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:07:06.133370  249953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:07:06.141269  249953 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:07:06.149116  249953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:07:06.156660  249953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:07:06.164113  249953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:07:06.167809  249953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:07:06.167857  249953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:07:06.208010  249953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:07:06.216384  249953 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:07:06.224913  249953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:07:06.229486  249953 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:07:06.229544  249953 kubeadm.go:401] StartCluster: {Name:old-k8s-version-322324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-322324 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:06.229628  249953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:07:06.229693  249953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:07:06.261098  249953 cri.go:89] found id: ""
	I1206 09:07:06.261166  249953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:07:06.269664  249953 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:07:06.277900  249953 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:07:06.277956  249953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:07:06.285702  249953 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:07:06.285724  249953 kubeadm.go:158] found existing configuration files:
	
	I1206 09:07:06.285768  249953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:07:06.294055  249953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:07:06.294129  249953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:07:06.302205  249953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:07:06.310953  249953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:07:06.311029  249953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:07:06.319190  249953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:07:06.326946  249953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:07:06.327021  249953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:07:06.334703  249953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:07:06.342686  249953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:07:06.342761  249953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:07:06.350939  249953 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:07:06.442484  249953 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:07:06.535383  249953 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:07:06.485916  222653 cri.go:89] found id: ""
	I1206 09:07:06.485944  222653 logs.go:282] 0 containers: []
	W1206 09:07:06.485954  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:06.485961  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:06.486062  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:06.524379  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:06.524418  222653 cri.go:89] found id: ""
	I1206 09:07:06.524429  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:06.524487  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:06.528543  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:06.528608  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:06.571768  222653 cri.go:89] found id: ""
	I1206 09:07:06.571789  222653 logs.go:282] 0 containers: []
	W1206 09:07:06.571796  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:06.571802  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:06.571861  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:06.618439  222653 cri.go:89] found id: ""
	I1206 09:07:06.618465  222653 logs.go:282] 0 containers: []
	W1206 09:07:06.618475  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:06.618486  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:06.618505  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:06.637866  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:06.637903  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:06.712930  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:06.712955  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:06.712971  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:06.755170  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:06.755199  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:06.830461  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:06.830540  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:06.870197  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:06.870225  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:06.928360  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:06.928393  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:06.971420  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:06.971455  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:09.589052  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:09.589521  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:09.589592  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:09.589650  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:09.630787  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:09.630815  222653 cri.go:89] found id: ""
	I1206 09:07:09.630825  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:09.630881  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.634973  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:09.635047  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:09.674953  222653 cri.go:89] found id: ""
	I1206 09:07:09.674983  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.675020  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:09.675029  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:09.675093  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:09.715329  222653 cri.go:89] found id: ""
	I1206 09:07:09.715357  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.715373  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:09.715381  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:09.715438  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:09.756013  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:09.756033  222653 cri.go:89] found id: ""
	I1206 09:07:09.756042  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:09.756105  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.760380  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:09.760448  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:09.807677  222653 cri.go:89] found id: ""
	I1206 09:07:09.807709  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.807721  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:09.807729  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:09.807786  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:09.852520  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:09.852546  222653 cri.go:89] found id: ""
	I1206 09:07:09.852556  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:09.852612  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.856776  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:09.856838  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:09.894073  222653 cri.go:89] found id: ""
	I1206 09:07:09.894098  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.894108  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:09.894115  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:09.894176  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:09.928380  222653 cri.go:89] found id: ""
	I1206 09:07:09.928416  222653 logs.go:282] 0 containers: []
	W1206 09:07:09.928426  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:09.928437  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:09.928455  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:10.024128  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:10.024160  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:10.041862  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:10.041890  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:10.104944  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:10.104967  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:10.104982  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:10.147088  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:10.147126  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:10.223802  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:10.223840  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:10.264387  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:10.264415  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:10.307815  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:10.307846  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:09.571042  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:09.571510  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:09.571578  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:09.571641  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:09.601395  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:09.601419  224160 cri.go:89] found id: ""
	I1206 09:07:09.601429  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:09.601484  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.605751  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:09.605820  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:09.635504  224160 cri.go:89] found id: ""
	I1206 09:07:09.635536  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.635546  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:09.635553  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:09.635604  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:09.664997  224160 cri.go:89] found id: ""
	I1206 09:07:09.665024  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.665037  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:09.665044  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:09.665102  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:09.695837  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:09.695855  224160 cri.go:89] found id: ""
	I1206 09:07:09.695862  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:09.695908  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.700576  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:09.700646  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:09.734255  224160 cri.go:89] found id: ""
	I1206 09:07:09.734282  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.734292  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:09.734300  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:09.734372  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:09.767137  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:09.767159  224160 cri.go:89] found id: ""
	I1206 09:07:09.767169  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:09.767316  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:09.772305  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:09.772383  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:09.815271  224160 cri.go:89] found id: ""
	I1206 09:07:09.815295  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.815307  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:09.815315  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:09.815392  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:09.845233  224160 cri.go:89] found id: ""
	I1206 09:07:09.845261  224160 logs.go:282] 0 containers: []
	W1206 09:07:09.845273  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:09.845283  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:09.845295  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:09.955042  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:09.955071  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:09.968817  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:09.968840  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:10.025635  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:10.025656  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:10.025672  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:10.059181  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:10.059207  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:10.088126  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:10.088164  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:10.115652  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:10.115675  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:10.174448  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:10.174492  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:10.934340  255989 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:07:10.934550  255989 start.go:159] libmachine.API.Create for "no-preload-769733" (driver="docker")
	I1206 09:07:10.934581  255989 client.go:173] LocalClient.Create starting
	I1206 09:07:10.934649  255989 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 09:07:10.934683  255989 main.go:143] libmachine: Decoding PEM data...
	I1206 09:07:10.934702  255989 main.go:143] libmachine: Parsing certificate...
	I1206 09:07:10.934756  255989 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 09:07:10.934781  255989 main.go:143] libmachine: Decoding PEM data...
	I1206 09:07:10.934793  255989 main.go:143] libmachine: Parsing certificate...
	I1206 09:07:10.935168  255989 cli_runner.go:164] Run: docker network inspect no-preload-769733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:07:10.954867  255989 cli_runner.go:211] docker network inspect no-preload-769733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:07:10.954956  255989 network_create.go:284] running [docker network inspect no-preload-769733] to gather additional debugging logs...
	I1206 09:07:10.954979  255989 cli_runner.go:164] Run: docker network inspect no-preload-769733
	W1206 09:07:10.972559  255989 cli_runner.go:211] docker network inspect no-preload-769733 returned with exit code 1
	I1206 09:07:10.972585  255989 network_create.go:287] error running [docker network inspect no-preload-769733]: docker network inspect no-preload-769733: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-769733 not found
	I1206 09:07:10.972604  255989 network_create.go:289] output of [docker network inspect no-preload-769733]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-769733 not found
	
	** /stderr **
	I1206 09:07:10.972688  255989 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:07:10.992148  255989 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9cbe8712784d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:e7:96:d9:b6:56} reservation:<nil>}
	I1206 09:07:10.992902  255989 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e3326c841ae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:98:ee:f3:0b:a9} reservation:<nil>}
	I1206 09:07:10.993639  255989 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c7af411946b0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:ab:a1:53:1d:7e} reservation:<nil>}
	I1206 09:07:10.994174  255989 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f6aeaf0351aa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:f6:31:65:11:00} reservation:<nil>}
	I1206 09:07:10.994572  255989 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6a656c6b5a08 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:62:de:88:d9:0b:15} reservation:<nil>}
	I1206 09:07:10.995179  255989 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06c80}
	I1206 09:07:10.995205  255989 network_create.go:124] attempt to create docker network no-preload-769733 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1206 09:07:10.995259  255989 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-769733 no-preload-769733
	I1206 09:07:11.048530  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 09:07:11.049374  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:11.050720  255989 network_create.go:108] docker network no-preload-769733 192.168.94.0/24 created
	I1206 09:07:11.050749  255989 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-769733" container
	I1206 09:07:11.050814  255989 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:07:11.052113  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:11.063376  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:11.072166  255989 cli_runner.go:164] Run: docker volume create no-preload-769733 --label name.minikube.sigs.k8s.io=no-preload-769733 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:07:11.091902  255989 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 09:07:11.092807  255989 oci.go:103] Successfully created a docker volume no-preload-769733
	I1206 09:07:11.092871  255989 cli_runner.go:164] Run: docker run --rm --name no-preload-769733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-769733 --entrypoint /usr/bin/test -v no-preload-769733:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:07:11.509857  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 09:07:11.509888  255989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 601.084495ms
	I1206 09:07:11.509901  255989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 09:07:11.553024  255989 oci.go:107] Successfully prepared a docker volume no-preload-769733
	I1206 09:07:11.553072  255989 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1206 09:07:11.553158  255989 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:07:11.553193  255989 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:07:11.553248  255989 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:07:11.612419  255989 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-769733 --name no-preload-769733 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-769733 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-769733 --network no-preload-769733 --ip 192.168.94.2 --volume no-preload-769733:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:07:11.904119  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Running}}
	I1206 09:07:11.926219  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:11.950273  255989 cli_runner.go:164] Run: docker exec no-preload-769733 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:07:11.998160  255989 oci.go:144] the created container "no-preload-769733" has a running status.
	I1206 09:07:11.998193  255989 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa...
	I1206 09:07:12.035252  255989 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:07:12.069183  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:12.110255  255989 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:07:12.110278  255989 kic_runner.go:114] Args: [docker exec --privileged no-preload-769733 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:07:12.184227  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:12.214036  255989 machine.go:94] provisionDockerMachine start ...
	I1206 09:07:12.214174  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:12.249194  255989 main.go:143] libmachine: Using SSH client type: native
	I1206 09:07:12.249532  255989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1206 09:07:12.249557  255989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:07:12.250421  255989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46140->127.0.0.1:33063: read: connection reset by peer
	I1206 09:07:12.274681  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 09:07:12.274723  255989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.366170347s
	I1206 09:07:12.274748  255989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 09:07:12.320591  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 09:07:12.320637  255989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.412254947s
	I1206 09:07:12.320659  255989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 09:07:12.328259  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 09:07:12.328297  255989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.419775507s
	I1206 09:07:12.328314  255989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 09:07:12.397930  255989 cache.go:157] /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 09:07:12.397970  255989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.489218658s
	I1206 09:07:12.398002  255989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 09:07:12.398024  255989 cache.go:87] Successfully saved all images to host disk.
	I1206 09:07:15.378907  255989 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-769733
	
	I1206 09:07:15.378937  255989 ubuntu.go:182] provisioning hostname "no-preload-769733"
	I1206 09:07:15.379012  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:15.397856  255989 main.go:143] libmachine: Using SSH client type: native
	I1206 09:07:15.398133  255989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1206 09:07:15.398154  255989 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-769733 && echo "no-preload-769733" | sudo tee /etc/hostname
	I1206 09:07:15.534926  255989 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-769733
	
	I1206 09:07:15.535036  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:15.553256  255989 main.go:143] libmachine: Using SSH client type: native
	I1206 09:07:15.553499  255989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1206 09:07:15.553520  255989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-769733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-769733/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-769733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:07:15.687658  255989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:07:15.687698  255989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:07:15.687722  255989 ubuntu.go:190] setting up certificates
	I1206 09:07:15.687731  255989 provision.go:84] configureAuth start
	I1206 09:07:15.687787  255989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-769733
	I1206 09:07:15.705634  255989 provision.go:143] copyHostCerts
	I1206 09:07:15.705707  255989 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:07:15.705724  255989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:07:15.705818  255989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:07:15.705933  255989 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:07:15.705949  255989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:07:15.706010  255989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:07:15.706122  255989 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:07:15.706133  255989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:07:15.706169  255989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:07:15.706239  255989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.no-preload-769733 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-769733]
	I1206 09:07:15.999796  249953 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1206 09:07:15.999899  249953 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:07:16.000032  249953 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:07:16.000197  249953 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:07:16.000253  249953 kubeadm.go:319] OS: Linux
	I1206 09:07:16.000313  249953 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:07:16.000374  249953 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:07:16.000444  249953 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:07:16.000511  249953 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:07:16.000624  249953 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:07:16.000693  249953 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:07:16.000760  249953 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:07:16.000839  249953 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:07:16.000930  249953 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:07:16.001073  249953 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:07:16.001184  249953 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 09:07:16.001261  249953 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:07:16.003935  249953 out.go:252]   - Generating certificates and keys ...
	I1206 09:07:16.004046  249953 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:07:16.004143  249953 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:07:16.004240  249953 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:07:16.004336  249953 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:07:16.004434  249953 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:07:16.004503  249953 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:07:16.004582  249953 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:07:16.004759  249953 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-322324] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:07:16.004859  249953 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:07:16.005051  249953 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-322324] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:07:16.005141  249953 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:07:16.005259  249953 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:07:16.005361  249953 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:07:16.005446  249953 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:07:16.005524  249953 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:07:16.005600  249953 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:07:16.005695  249953 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:07:16.005781  249953 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:07:16.005908  249953 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:07:16.006082  249953 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:07:16.007458  249953 out.go:252]   - Booting up control plane ...
	I1206 09:07:16.007579  249953 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:07:16.007685  249953 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:07:16.007791  249953 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:07:16.007951  249953 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:07:16.008159  249953 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:07:16.008236  249953 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:07:16.008491  249953 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 09:07:16.008629  249953 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002384 seconds
	I1206 09:07:16.008764  249953 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:07:16.008923  249953 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:07:16.009039  249953 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:07:16.009314  249953 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-322324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:07:16.009399  249953 kubeadm.go:319] [bootstrap-token] Using token: o8hb1i.ymis9idm9gbc71mk
	I1206 09:07:16.011218  249953 out.go:252]   - Configuring RBAC rules ...
	I1206 09:07:16.011338  249953 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:07:16.011428  249953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:07:16.011616  249953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:07:16.011779  249953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:07:16.011960  249953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:07:16.012107  249953 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:07:16.012298  249953 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:07:16.012369  249953 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:07:16.012411  249953 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:07:16.012418  249953 kubeadm.go:319] 
	I1206 09:07:16.012479  249953 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:07:16.012488  249953 kubeadm.go:319] 
	I1206 09:07:16.012635  249953 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:07:16.012651  249953 kubeadm.go:319] 
	I1206 09:07:16.012696  249953 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:07:16.012927  249953 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:07:16.013047  249953 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:07:16.013067  249953 kubeadm.go:319] 
	I1206 09:07:16.013170  249953 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:07:16.013180  249953 kubeadm.go:319] 
	I1206 09:07:16.013242  249953 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:07:16.013251  249953 kubeadm.go:319] 
	I1206 09:07:16.013351  249953 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:07:16.013452  249953 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:07:16.013531  249953 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:07:16.013541  249953 kubeadm.go:319] 
	I1206 09:07:16.013653  249953 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:07:16.013746  249953 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:07:16.013755  249953 kubeadm.go:319] 
	I1206 09:07:16.013876  249953 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token o8hb1i.ymis9idm9gbc71mk \
	I1206 09:07:16.014056  249953 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:07:16.014091  249953 kubeadm.go:319] 	--control-plane 
	I1206 09:07:16.014097  249953 kubeadm.go:319] 
	I1206 09:07:16.014205  249953 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:07:16.014220  249953 kubeadm.go:319] 
	I1206 09:07:16.014349  249953 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token o8hb1i.ymis9idm9gbc71mk \
	I1206 09:07:16.014488  249953 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:07:16.014506  249953 cni.go:84] Creating CNI manager for ""
	I1206 09:07:16.014513  249953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:07:16.016805  249953 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:07:12.850241  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:12.850658  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:12.850714  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:12.850761  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:12.919262  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:12.919285  222653 cri.go:89] found id: ""
	I1206 09:07:12.919344  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:12.919425  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:12.924031  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:12.924088  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:12.962588  222653 cri.go:89] found id: ""
	I1206 09:07:12.962613  222653 logs.go:282] 0 containers: []
	W1206 09:07:12.962621  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:12.962628  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:12.962679  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:13.029685  222653 cri.go:89] found id: ""
	I1206 09:07:13.029710  222653 logs.go:282] 0 containers: []
	W1206 09:07:13.029719  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:13.029728  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:13.029780  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:13.069420  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:13.069444  222653 cri.go:89] found id: ""
	I1206 09:07:13.069455  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:13.069511  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:13.073704  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:13.073763  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:13.121197  222653 cri.go:89] found id: ""
	I1206 09:07:13.121223  222653 logs.go:282] 0 containers: []
	W1206 09:07:13.121233  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:13.121241  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:13.121303  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:13.166008  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:13.166032  222653 cri.go:89] found id: ""
	I1206 09:07:13.166042  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:13.166125  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:13.170532  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:13.170611  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:13.207075  222653 cri.go:89] found id: ""
	I1206 09:07:13.207102  222653 logs.go:282] 0 containers: []
	W1206 09:07:13.207112  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:13.207120  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:13.207178  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:13.242714  222653 cri.go:89] found id: ""
	I1206 09:07:13.242739  222653 logs.go:282] 0 containers: []
	W1206 09:07:13.242750  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:13.242760  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:13.242774  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:13.304263  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:13.304284  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:13.304295  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:13.345157  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:13.345189  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:13.415592  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:13.415624  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:13.449901  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:13.449927  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:13.493945  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:13.493974  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:13.531541  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:13.531562  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:13.622614  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:13.622648  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:16.139891  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:16.140406  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:16.140464  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:16.140522  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:16.180136  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:16.180160  222653 cri.go:89] found id: ""
	I1206 09:07:16.180171  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:16.180228  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.184945  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:16.185030  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:16.225521  222653 cri.go:89] found id: ""
	I1206 09:07:16.225550  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.225561  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:16.225568  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:16.225619  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:16.285459  222653 cri.go:89] found id: ""
	I1206 09:07:16.285490  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.285499  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:16.285507  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:16.285567  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:16.328689  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:16.328711  222653 cri.go:89] found id: ""
	I1206 09:07:16.328721  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:16.328776  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.332610  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:16.332676  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:16.367770  222653 cri.go:89] found id: ""
	I1206 09:07:16.367796  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.367807  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:16.367815  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:16.367870  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:16.406206  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:16.406231  222653 cri.go:89] found id: ""
	I1206 09:07:16.406242  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:16.406294  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.410111  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:16.410189  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:12.706781  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:12.707242  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:12.707304  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:12.707428  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:12.744698  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:12.744725  224160 cri.go:89] found id: ""
	I1206 09:07:12.744735  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:12.744784  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:12.749332  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:12.749402  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:12.777451  224160 cri.go:89] found id: ""
	I1206 09:07:12.777480  224160 logs.go:282] 0 containers: []
	W1206 09:07:12.777492  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:12.777507  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:12.777572  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:12.805458  224160 cri.go:89] found id: ""
	I1206 09:07:12.805490  224160 logs.go:282] 0 containers: []
	W1206 09:07:12.805502  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:12.805510  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:12.805567  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:12.838189  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:12.838210  224160 cri.go:89] found id: ""
	I1206 09:07:12.838218  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:12.838262  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:12.843559  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:12.843638  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:12.890786  224160 cri.go:89] found id: ""
	I1206 09:07:12.890816  224160 logs.go:282] 0 containers: []
	W1206 09:07:12.890852  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:12.890861  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:12.892037  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:12.926302  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:12.926321  224160 cri.go:89] found id: ""
	I1206 09:07:12.926331  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:12.926385  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:12.930343  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:12.930401  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:12.960946  224160 cri.go:89] found id: ""
	I1206 09:07:12.960966  224160 logs.go:282] 0 containers: []
	W1206 09:07:12.960974  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:12.960980  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:12.961048  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:13.006621  224160 cri.go:89] found id: ""
	I1206 09:07:13.006664  224160 logs.go:282] 0 containers: []
	W1206 09:07:13.006674  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:13.006685  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:13.006699  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:13.049247  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:13.049282  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:13.120614  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:13.120656  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:13.160599  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:13.160635  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:13.246350  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:13.246379  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:13.260674  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:13.260697  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:13.320787  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:13.320805  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:13.320816  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:13.352599  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:13.352626  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:15.884062  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:15.884422  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:15.884480  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:15.884540  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:15.911845  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:15.911867  224160 cri.go:89] found id: ""
	I1206 09:07:15.911876  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:15.911928  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:15.915912  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:15.916013  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:15.945043  224160 cri.go:89] found id: ""
	I1206 09:07:15.945069  224160 logs.go:282] 0 containers: []
	W1206 09:07:15.945081  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:15.945088  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:15.945152  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:15.983423  224160 cri.go:89] found id: ""
	I1206 09:07:15.983451  224160 logs.go:282] 0 containers: []
	W1206 09:07:15.983462  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:15.983469  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:15.983522  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:16.018180  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:16.018198  224160 cri.go:89] found id: ""
	I1206 09:07:16.018208  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:16.018257  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.022266  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:16.022328  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:16.052872  224160 cri.go:89] found id: ""
	I1206 09:07:16.052897  224160 logs.go:282] 0 containers: []
	W1206 09:07:16.052907  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:16.052916  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:16.052972  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:16.082270  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:16.082292  224160 cri.go:89] found id: ""
	I1206 09:07:16.082301  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:16.082357  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.086421  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:16.086486  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:16.115829  224160 cri.go:89] found id: ""
	I1206 09:07:16.115855  224160 logs.go:282] 0 containers: []
	W1206 09:07:16.115866  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:16.115874  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:16.115930  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:16.144090  224160 cri.go:89] found id: ""
	I1206 09:07:16.144126  224160 logs.go:282] 0 containers: []
	W1206 09:07:16.144136  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:16.144148  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:16.144168  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:16.178435  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:16.178462  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:16.244309  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:16.244345  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:16.291793  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:16.291818  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:16.414761  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:16.414794  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:16.429889  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:16.429920  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:16.498222  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:16.498243  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:16.498258  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:16.539036  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:16.539070  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:15.748318  255989 provision.go:177] copyRemoteCerts
	I1206 09:07:15.748380  255989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:07:15.748412  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:15.767082  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:15.875466  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:07:15.898225  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:07:15.917318  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:07:15.935203  255989 provision.go:87] duration metric: took 247.458267ms to configureAuth
	I1206 09:07:15.935232  255989 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:07:15.935432  255989 config.go:182] Loaded profile config "no-preload-769733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:15.935541  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:15.955701  255989 main.go:143] libmachine: Using SSH client type: native
	I1206 09:07:15.955897  255989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1206 09:07:15.955913  255989 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:07:16.274227  255989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:07:16.274255  255989 machine.go:97] duration metric: took 4.060196139s to provisionDockerMachine
	I1206 09:07:16.274286  255989 client.go:176] duration metric: took 5.339676742s to LocalClient.Create
	I1206 09:07:16.274309  255989 start.go:167] duration metric: took 5.33975868s to libmachine.API.Create "no-preload-769733"
	I1206 09:07:16.274321  255989 start.go:293] postStartSetup for "no-preload-769733" (driver="docker")
	I1206 09:07:16.274343  255989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:07:16.274416  255989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:07:16.274471  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:16.297640  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:16.398592  255989 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:07:16.403160  255989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:07:16.403196  255989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:07:16.403209  255989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:07:16.403269  255989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:07:16.403451  255989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:07:16.403583  255989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:07:16.412336  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:07:16.435598  255989 start.go:296] duration metric: took 161.262623ms for postStartSetup
	I1206 09:07:16.436050  255989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-769733
	I1206 09:07:16.458102  255989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/config.json ...
	I1206 09:07:16.458396  255989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:07:16.458448  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:16.482632  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:16.582121  255989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:07:16.587163  255989 start.go:128] duration metric: took 5.655460621s to createHost
	I1206 09:07:16.587197  255989 start.go:83] releasing machines lock for "no-preload-769733", held for 5.655591978s
	I1206 09:07:16.587271  255989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-769733
	I1206 09:07:16.609854  255989 ssh_runner.go:195] Run: cat /version.json
	I1206 09:07:16.609907  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:16.609936  255989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:07:16.610034  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:16.632017  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:16.632365  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:16.789964  255989 ssh_runner.go:195] Run: systemctl --version
	I1206 09:07:16.797546  255989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:07:16.837652  255989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:07:16.843184  255989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:07:16.843241  255989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:07:16.875435  255989 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:07:16.875460  255989 start.go:496] detecting cgroup driver to use...
	I1206 09:07:16.875509  255989 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:07:16.875576  255989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:07:16.898701  255989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:07:16.914622  255989 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:07:16.914693  255989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:07:16.936570  255989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:07:16.966653  255989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:07:17.061144  255989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:07:17.146251  255989 docker.go:234] disabling docker service ...
	I1206 09:07:17.146314  255989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:07:17.165775  255989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:07:17.178776  255989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:07:17.262233  255989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:07:17.344760  255989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:07:17.357901  255989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:07:17.372631  255989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:07:17.372689  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.383601  255989 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:07:17.383675  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.393092  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.402233  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.411567  255989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:07:17.420388  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.430003  255989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.444618  255989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:07:17.453876  255989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:07:17.462270  255989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:07:17.470410  255989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:07:17.552642  255989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:07:17.697306  255989 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:07:17.697384  255989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:07:17.701411  255989 start.go:564] Will wait 60s for crictl version
	I1206 09:07:17.701458  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:17.705201  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:07:17.730717  255989 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:07:17.730804  255989 ssh_runner.go:195] Run: crio --version
	I1206 09:07:17.758759  255989 ssh_runner.go:195] Run: crio --version
	I1206 09:07:17.788391  255989 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 09:07:17.789487  255989 cli_runner.go:164] Run: docker network inspect no-preload-769733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:07:17.807311  255989 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:07:17.811445  255989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:07:17.821852  255989 kubeadm.go:884] updating cluster {Name:no-preload-769733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:07:17.821972  255989 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:07:17.822034  255989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:07:17.845595  255989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 09:07:17.845620  255989 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 09:07:17.845731  255989 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:17.845764  255989 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:17.845800  255989 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:17.845728  255989 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:17.845763  255989 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:17.845749  255989 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:17.845766  255989 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1206 09:07:17.845740  255989 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:17.846925  255989 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:17.846942  255989 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:17.846945  255989 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:17.846950  255989 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:17.846949  255989 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1206 09:07:17.846934  255989 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:17.846951  255989 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:17.846925  255989 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:17.961600  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:17.970315  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:17.972149  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:17.975463  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:17.989933  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:17.995032  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1206 09:07:17.996719  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.008328  255989 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1206 09:07:18.008378  255989 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:18.008429  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.014677  255989 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1206 09:07:18.014720  255989 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:18.014789  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.017818  255989 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1206 09:07:18.017869  255989 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:18.017915  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.067676  255989 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1206 09:07:18.067720  255989 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:18.067718  255989 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1206 09:07:18.067751  255989 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1206 09:07:18.067770  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.067790  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.067685  255989 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1206 09:07:18.067822  255989 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:18.067837  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:18.067748  255989 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1206 09:07:18.067878  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:18.067878  255989 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.067918  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.067801  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:18.067858  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:18.072890  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:18.072898  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:07:18.100606  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:18.100675  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:18.100731  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:18.100773  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:18.100689  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.106162  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:07:18.106175  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:18.137196  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:18.137771  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:07:18.140334  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:07:18.142944  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:07:18.143036  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:07:18.143059  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:07:18.142945  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.174636  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:07:18.174748  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:18.174838  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:18.177808  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 09:07:18.177946  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1206 09:07:18.180760  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:07:18.180768  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1206 09:07:18.180836  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:07:18.181397  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1206 09:07:18.181424  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 09:07:18.181501  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1206 09:07:18.181509  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 09:07:18.202855  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:18.202958  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:18.203009  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1206 09:07:18.202959  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1206 09:07:18.203039  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1206 09:07:18.203058  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1206 09:07:18.214856  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:18.214904  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1206 09:07:18.214929  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1206 09:07:18.214862  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1206 09:07:18.214956  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:18.214958  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1206 09:07:18.214958  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1206 09:07:18.214974  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1206 09:07:18.215041  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1206 09:07:18.215067  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1206 09:07:18.348891  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1206 09:07:18.348933  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1206 09:07:18.376009  255989 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1206 09:07:18.376089  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1206 09:07:18.858609  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1206 09:07:18.858655  255989 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:18.858701  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 09:07:18.910941  255989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:20.016886  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.158156195s)
	I1206 09:07:20.016916  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1206 09:07:20.016944  255989 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1206 09:07:20.016954  255989 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.105980936s)
	I1206 09:07:20.017037  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1206 09:07:20.017107  255989 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1206 09:07:20.017156  255989 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:20.017210  255989 ssh_runner.go:195] Run: which crictl
	I1206 09:07:16.017979  249953 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:07:16.022210  249953 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1206 09:07:16.022226  249953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:07:16.035082  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:07:16.805703  249953 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:07:16.805781  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:16.805811  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-322324 minikube.k8s.io/updated_at=2025_12_06T09_07_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=old-k8s-version-322324 minikube.k8s.io/primary=true
	I1206 09:07:16.816361  249953 ops.go:34] apiserver oom_adj: -16
	I1206 09:07:16.907267  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:17.408200  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:17.908280  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:18.407683  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:18.908231  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:19.407851  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:19.908197  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:20.408163  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:20.907613  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:16.451544  222653 cri.go:89] found id: ""
	I1206 09:07:16.451572  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.451582  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:16.451590  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:16.451648  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:16.493766  222653 cri.go:89] found id: ""
	I1206 09:07:16.493794  222653 logs.go:282] 0 containers: []
	W1206 09:07:16.493805  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:16.493815  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:16.493830  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:16.600529  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:16.600563  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:16.618829  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:16.618862  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:16.691485  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:16.691505  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:16.691519  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:16.733495  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:16.733529  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:16.810525  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:16.810554  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:16.853117  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:16.853193  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:16.916407  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:16.916436  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:19.476980  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:19.477495  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:19.477547  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:19.477604  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:19.523474  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:19.523499  222653 cri.go:89] found id: ""
	I1206 09:07:19.523510  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:19.523564  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.528345  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:19.528414  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:19.574594  222653 cri.go:89] found id: ""
	I1206 09:07:19.574624  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.574635  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:19.574643  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:19.574699  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:19.616375  222653 cri.go:89] found id: ""
	I1206 09:07:19.616403  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.616414  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:19.616423  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:19.616482  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:19.663286  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:19.663312  222653 cri.go:89] found id: ""
	I1206 09:07:19.663321  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:19.663385  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.668564  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:19.668634  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:19.714113  222653 cri.go:89] found id: ""
	I1206 09:07:19.714139  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.714150  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:19.714157  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:19.714211  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:19.756842  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:19.756874  222653 cri.go:89] found id: ""
	I1206 09:07:19.756885  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:19.756950  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.761470  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:19.761549  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:19.799903  222653 cri.go:89] found id: ""
	I1206 09:07:19.799925  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.799934  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:19.799946  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:19.800011  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:19.836747  222653 cri.go:89] found id: ""
	I1206 09:07:19.836791  222653 logs.go:282] 0 containers: []
	W1206 09:07:19.836800  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:19.836809  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:19.836824  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:19.875524  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:19.875557  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:19.931634  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:19.931672  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:19.981752  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:19.981785  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:20.076548  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:20.076582  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:20.094654  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:20.094686  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:20.162170  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:20.162188  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:20.162200  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:20.202009  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:20.202079  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:19.069352  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:19.069765  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:19.069832  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:19.069880  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:19.095516  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:19.095539  224160 cri.go:89] found id: ""
	I1206 09:07:19.095547  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:19.095602  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.099652  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:19.099713  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:19.125980  224160 cri.go:89] found id: ""
	I1206 09:07:19.126028  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.126037  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:19.126044  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:19.126116  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:19.157560  224160 cri.go:89] found id: ""
	I1206 09:07:19.157585  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.157596  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:19.157603  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:19.157662  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:19.185043  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:19.185072  224160 cri.go:89] found id: ""
	I1206 09:07:19.185082  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:19.185140  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.189218  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:19.189278  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:19.216149  224160 cri.go:89] found id: ""
	I1206 09:07:19.216176  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.216188  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:19.216196  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:19.216256  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:19.248358  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:19.248382  224160 cri.go:89] found id: ""
	I1206 09:07:19.248391  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:19.248447  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:19.253303  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:19.253360  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:19.282404  224160 cri.go:89] found id: ""
	I1206 09:07:19.282435  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.282447  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:19.282455  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:19.282519  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:19.312767  224160 cri.go:89] found id: ""
	I1206 09:07:19.312788  224160 logs.go:282] 0 containers: []
	W1206 09:07:19.312796  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:19.312805  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:19.312815  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:19.343035  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:19.343069  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:19.419701  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:19.419793  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:19.458441  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:19.458478  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:19.577004  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:19.577055  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:19.594808  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:19.594845  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:19.668665  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:19.668688  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:19.668703  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:19.704110  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:19.704139  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
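The run above keeps probing the apiserver healthz endpoint, records "connection refused" as the apiserver being stopped, and then falls back to gathering component logs. A minimal Go sketch of such a probe loop, assuming the same https://<node-ip>:8443/healthz layout; the waitForAPIServer name, the 30-second budget and the InsecureSkipVerify transport are illustrative choices for the sketch, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the apiserver /healthz endpoint until it answers
// 200 OK or the deadline expires. A "connection refused" error just means
// the apiserver is not listening yet, so the loop keeps retrying.
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The control plane serves a self-signed certificate here, so this
		// sketch skips verification; real code would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.85.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}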
	I1206 09:07:21.244467  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.227403916s)
	I1206 09:07:21.244501  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1206 09:07:21.244522  255989 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 09:07:21.244520  255989 ssh_runner.go:235] Completed: which crictl: (1.227291112s)
	I1206 09:07:21.244574  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 09:07:21.244577  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:22.325694  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.081089159s)
	I1206 09:07:22.325734  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1206 09:07:22.325756  255989 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:22.325811  255989 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.081136917s)
	I1206 09:07:22.325886  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:22.325819  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 09:07:22.355941  255989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:23.578476  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.252550319s)
	I1206 09:07:23.578515  255989 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.222546604s)
	I1206 09:07:23.578517  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1206 09:07:23.578541  255989 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:23.578546  255989 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 09:07:23.578580  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 09:07:23.578625  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:07:24.931278  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.352673931s)
	I1206 09:07:24.931299  255989 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.352652271s)
	I1206 09:07:24.931312  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1206 09:07:24.931326  255989 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1206 09:07:24.931340  255989 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:07:24.931345  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1206 09:07:24.931384  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:07:21.407485  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:21.907905  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:22.407632  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:22.908214  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:23.408097  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:23.907323  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:24.408230  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:24.908260  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:25.409130  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:25.907386  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:22.782066  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:22.782550  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:22.782621  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:22.782766  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:22.819387  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:22.819410  222653 cri.go:89] found id: ""
	I1206 09:07:22.819421  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:22.819477  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.824130  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:22.824204  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:22.867457  222653 cri.go:89] found id: ""
	I1206 09:07:22.867486  222653 logs.go:282] 0 containers: []
	W1206 09:07:22.867495  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:22.867503  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:22.867563  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:22.914264  222653 cri.go:89] found id: ""
	I1206 09:07:22.914290  222653 logs.go:282] 0 containers: []
	W1206 09:07:22.914301  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:22.914322  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:22.914380  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:22.954438  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:22.954465  222653 cri.go:89] found id: ""
	I1206 09:07:22.954475  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:22.954536  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.958805  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:22.958869  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:23.002279  222653 cri.go:89] found id: ""
	I1206 09:07:23.002308  222653 logs.go:282] 0 containers: []
	W1206 09:07:23.002318  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:23.002326  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:23.002388  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:23.039308  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:23.039342  222653 cri.go:89] found id: ""
	I1206 09:07:23.039353  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:23.039407  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:23.043416  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:23.043479  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:23.083536  222653 cri.go:89] found id: ""
	I1206 09:07:23.083558  222653 logs.go:282] 0 containers: []
	W1206 09:07:23.083565  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:23.083571  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:23.083627  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:23.119518  222653 cri.go:89] found id: ""
	I1206 09:07:23.119543  222653 logs.go:282] 0 containers: []
	W1206 09:07:23.119553  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:23.119563  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:23.119578  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:23.193995  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:23.194025  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:23.230380  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:23.230405  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:23.281194  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:23.281232  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:23.325158  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:23.325186  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:23.431223  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:23.431254  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:23.448934  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:23.448962  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:23.521617  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:23.521641  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:23.521656  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:26.062046  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:26.062490  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:26.062546  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:26.062599  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:26.104652  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:26.104672  222653 cri.go:89] found id: ""
	I1206 09:07:26.104681  222653 logs.go:282] 1 containers: [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:26.104737  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:26.108658  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:26.108727  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:26.148882  222653 cri.go:89] found id: ""
	I1206 09:07:26.148910  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.148920  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:26.148927  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:26.148984  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:26.187305  222653 cri.go:89] found id: ""
	I1206 09:07:26.187330  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.187338  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:26.187345  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:26.187389  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:26.229204  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:26.229229  222653 cri.go:89] found id: ""
	I1206 09:07:26.229240  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:26.229303  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:26.233743  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:26.233821  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:26.270792  222653 cri.go:89] found id: ""
	I1206 09:07:26.270821  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.270836  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:26.270844  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:26.270904  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:26.309623  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:26.309645  222653 cri.go:89] found id: ""
	I1206 09:07:26.309655  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:26.309710  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:26.313667  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:26.313734  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:26.351148  222653 cri.go:89] found id: ""
	I1206 09:07:26.351175  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.351185  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:26.351193  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:26.351247  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:26.389692  222653 cri.go:89] found id: ""
	I1206 09:07:26.389729  222653 logs.go:282] 0 containers: []
	W1206 09:07:26.389741  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:26.389754  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:26.389771  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:26.439423  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:26.439463  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:22.238199  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:22.238765  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:22.238818  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:22.238869  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:22.272767  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:22.272790  224160 cri.go:89] found id: ""
	I1206 09:07:22.272801  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:22.272857  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.277421  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:22.277480  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:22.304689  224160 cri.go:89] found id: ""
	I1206 09:07:22.304715  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.304724  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:22.304730  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:22.304790  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:22.332626  224160 cri.go:89] found id: ""
	I1206 09:07:22.332653  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.332664  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:22.332672  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:22.332725  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:22.363744  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:22.363767  224160 cri.go:89] found id: ""
	I1206 09:07:22.363777  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:22.363832  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.368679  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:22.368748  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:22.399667  224160 cri.go:89] found id: ""
	I1206 09:07:22.399695  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.399706  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:22.399713  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:22.399771  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:22.430379  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:22.430405  224160 cri.go:89] found id: ""
	I1206 09:07:22.430415  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:22.430478  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:22.434663  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:22.434725  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:22.462530  224160 cri.go:89] found id: ""
	I1206 09:07:22.462559  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.462571  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:22.462578  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:22.462642  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:22.493665  224160 cri.go:89] found id: ""
	I1206 09:07:22.493692  224160 logs.go:282] 0 containers: []
	W1206 09:07:22.493702  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:22.493713  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:22.493725  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:22.588888  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:22.588919  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:22.603368  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:22.603396  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:22.660150  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:22.660172  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:22.660187  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:22.691897  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:22.691936  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:22.719279  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:22.719302  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:22.748448  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:22.748476  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:22.807592  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:22.807627  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:25.344068  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:25.344559  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:25.344607  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:25.344653  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:25.376468  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:25.376494  224160 cri.go:89] found id: ""
	I1206 09:07:25.376505  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:25.376557  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:25.380651  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:25.380704  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:25.411711  224160 cri.go:89] found id: ""
	I1206 09:07:25.411736  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.411747  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:25.411755  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:25.411808  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:25.458026  224160 cri.go:89] found id: ""
	I1206 09:07:25.458057  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.458068  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:25.458077  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:25.458134  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:25.506732  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:25.506754  224160 cri.go:89] found id: ""
	I1206 09:07:25.506763  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:25.506816  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:25.513710  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:25.513837  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:25.554365  224160 cri.go:89] found id: ""
	I1206 09:07:25.554390  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.554408  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:25.554415  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:25.554470  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:25.596695  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:25.596742  224160 cri.go:89] found id: ""
	I1206 09:07:25.596752  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:25.596826  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:25.602118  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:25.602186  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:25.636831  224160 cri.go:89] found id: ""
	I1206 09:07:25.636898  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.636914  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:25.636922  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:25.637073  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:25.675266  224160 cri.go:89] found id: ""
	I1206 09:07:25.675290  224160 logs.go:282] 0 containers: []
	W1206 09:07:25.675300  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:25.675309  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:25.675322  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:25.712437  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:25.712463  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:25.802809  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:25.802846  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:25.817950  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:25.817975  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:25.885512  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:25.885537  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:25.885553  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:25.922034  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:25.922066  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:25.954182  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:25.954212  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:25.985910  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:25.985947  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:26.361903  255989 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.430497145s)
	I1206 09:07:26.361927  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1206 09:07:26.361948  255989 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:07:26.362068  255989 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:07:26.958660  255989 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-5617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 09:07:26.958698  255989 cache_images.go:125] Successfully loaded all cached images
	I1206 09:07:26.958705  255989 cache_images.go:94] duration metric: took 9.11307095s to LoadCachedImages
	I1206 09:07:26.958720  255989 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 09:07:26.958809  255989 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-769733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:07:26.958898  255989 ssh_runner.go:195] Run: crio config
	I1206 09:07:27.004545  255989 cni.go:84] Creating CNI manager for ""
	I1206 09:07:27.004566  255989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:07:27.004583  255989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:07:27.004602  255989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-769733 NodeName:no-preload-769733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:07:27.004761  255989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-769733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
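The kubeadm config rendered above is a single file containing several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A minimal sketch, assuming the gopkg.in/yaml.v3 dependency, of splitting such a file and listing each document's apiVersion and kind; this is illustrative tooling, not minikube's own code.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log: the generated config is staged here before
	// kubeadm runs against it.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.v3's Decoder walks the stream one document at a time and
	// returns io.EOF once the last "---"-separated document is consumed.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s/%s\n", doc.APIVersion, doc.Kind)
	}
}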
	
	I1206 09:07:27.004826  255989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:07:27.012998  255989 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1206 09:07:27.013055  255989 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:07:27.020869  255989 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1206 09:07:27.020923  255989 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1206 09:07:27.020965  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1206 09:07:27.020957  255989 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1206 09:07:27.025191  255989 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1206 09:07:27.025222  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1206 09:07:27.737595  255989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:07:27.751161  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1206 09:07:27.755029  255989 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1206 09:07:27.755060  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1206 09:07:27.847373  255989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1206 09:07:27.860184  255989 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1206 09:07:27.860245  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
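The kubectl, kubelet and kubeadm transfers above all follow the same check-then-copy pattern: stat the target under /var/lib/minikube/binaries and only push the cached binary when the stat fails. A local sketch of that idempotent copy, assuming plain files rather than the SSH session the test actually uses; copyIfMissing and the paths in main are made up for illustration.

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist,
// mirroring the stat-then-transfer pattern in the log (locally, not over SSH).
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to do
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyIfMissing("kubelet", "/tmp/kubelet"); err != nil {
		fmt.Println(err)
	}
}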
	I1206 09:07:28.086332  255989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:07:28.095015  255989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 09:07:28.107611  255989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:07:28.181625  255989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1206 09:07:28.195170  255989 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:07:28.199028  255989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
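The one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the node IP. A rough Go equivalent of the same idea, assuming a writable copy of the hosts file; the ensureHostsEntry helper and the hosts.test path are invented for the sketch, and touching the real /etc/hosts would need root.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one
// line maps the given hostname to ip, mirroring the grep/echo pipeline in
// the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}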
	I1206 09:07:28.218317  255989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:07:28.301493  255989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:07:28.323234  255989 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733 for IP: 192.168.94.2
	I1206 09:07:28.323256  255989 certs.go:195] generating shared ca certs ...
	I1206 09:07:28.323278  255989 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.323446  255989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:07:28.323487  255989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:07:28.323497  255989 certs.go:257] generating profile certs ...
	I1206 09:07:28.323548  255989 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.key
	I1206 09:07:28.323561  255989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt with IP's: []
	I1206 09:07:28.439838  255989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt ...
	I1206 09:07:28.439864  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: {Name:mk51ce1a337b109238ea95988a6d82b04abffa87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.440048  255989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.key ...
	I1206 09:07:28.440063  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.key: {Name:mk549eb3bee0556ac6670ffc50072f5f60e88eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.440148  255989 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key.54d70cf7
	I1206 09:07:28.440164  255989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt.54d70cf7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:07:28.513593  255989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt.54d70cf7 ...
	I1206 09:07:28.513628  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt.54d70cf7: {Name:mkbd6a20e4f216916338facbe5f5c86a546ef2d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.513836  255989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key.54d70cf7 ...
	I1206 09:07:28.513858  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key.54d70cf7: {Name:mk38235e3e898831eee31ebf5b7782ea0c001e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.513962  255989 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt.54d70cf7 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt
	I1206 09:07:28.514099  255989 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key.54d70cf7 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key
	I1206 09:07:28.514180  255989 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.key
	I1206 09:07:28.514203  255989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.crt with IP's: []
	I1206 09:07:28.576097  255989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.crt ...
	I1206 09:07:28.576120  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.crt: {Name:mka0e374df5d33e71d4cc208952fa17a2348f688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.576288  255989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.key ...
	I1206 09:07:28.576304  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.key: {Name:mk810daf6c924b5eb6053d90018cda8997f74e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:28.576534  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:07:28.576581  255989 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:07:28.576596  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:07:28.576632  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:07:28.576675  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:07:28.576713  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:07:28.576775  255989 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:07:28.577491  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:07:28.596884  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:07:28.616262  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:07:28.635527  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:07:28.653723  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:07:28.673896  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:07:28.693706  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:07:28.712568  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:07:28.731423  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:07:28.753516  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:07:28.772171  255989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:07:28.791901  255989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:07:28.806351  255989 ssh_runner.go:195] Run: openssl version
	I1206 09:07:28.812513  255989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:07:28.820357  255989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:07:28.827774  255989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:07:28.831658  255989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:07:28.831712  255989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:07:28.869520  255989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:07:28.878297  255989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:07:28.886554  255989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:28.894673  255989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:07:28.902154  255989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:28.905942  255989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:28.906001  255989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:07:28.949359  255989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:07:28.959758  255989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:07:28.970930  255989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:07:28.979436  255989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:07:28.987970  255989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:07:28.992224  255989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:07:28.992279  255989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:07:29.030890  255989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:07:29.040374  255989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
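The openssl/ln sequence above installs each CA certificate under /etc/ssl/certs by its OpenSSL subject hash, so TLS clients that scan that directory by hash can find it. A small Go sketch of the same idea, shelling out to openssl x509 -hash; linkCertByHash is an illustrative helper rather than minikube code, and writing into /etc/ssl/certs requires root, so point trustDir at a scratch directory when trying it.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the certificate's OpenSSL subject hash and
// symlinks <hash>.0 in the trust directory to the certificate, matching
// the openssl + ln -fs pattern in the log above.
func linkCertByHash(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}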
	I1206 09:07:29.048917  255989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:07:29.053467  255989 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:07:29.053531  255989 kubeadm.go:401] StartCluster: {Name:no-preload-769733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-769733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:29.053620  255989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:07:29.053691  255989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:07:29.083934  255989 cri.go:89] found id: ""
	I1206 09:07:29.084033  255989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:07:29.092853  255989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:07:29.102213  255989 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:07:29.102279  255989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:07:29.110683  255989 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:07:29.110705  255989 kubeadm.go:158] found existing configuration files:
	
	I1206 09:07:29.110750  255989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:07:29.118482  255989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:07:29.118539  255989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:07:29.126578  255989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:07:29.135512  255989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:07:29.135577  255989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:07:29.144628  255989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:07:29.152275  255989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:07:29.152334  255989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:07:29.159452  255989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:07:29.166720  255989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:07:29.166764  255989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
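
The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise (including the not-yet-existing files seen here) it is removed so kubeadm can regenerate it. The per-file logic is roughly:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the file only if it already references the expected API endpoint
        if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done
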
	I1206 09:07:29.173788  255989 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:07:29.210630  255989 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:07:29.210708  255989 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:07:29.278372  255989 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:07:29.278489  255989 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:07:29.278547  255989 kubeadm.go:319] OS: Linux
	I1206 09:07:29.278622  255989 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:07:29.278710  255989 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:07:29.278771  255989 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:07:29.278860  255989 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:07:29.278936  255989 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:07:29.279024  255989 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:07:29.279089  255989 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:07:29.279145  255989 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:07:29.337164  255989 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:07:29.337326  255989 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:07:29.337466  255989 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
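
Because the "node" is a Docker container rather than a VM, several kubeadm preflight checks (swap, memory, CPU count, SystemVerification, the bridge-nf-call-iptables file, and the DirAvailable/FileAvailable checks) are expected to fail and are skipped explicitly. The invocation logged at the start of this block has the shape below (flag list abridged; the full set is in the Start: line above):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification,Port-10250,...
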
	I1206 09:07:29.356240  255989 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:07:26.408187  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:26.907752  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:27.407937  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:27.908341  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:28.407456  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:28.908145  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:29.408316  249953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:29.478685  249953 kubeadm.go:1114] duration metric: took 12.672959683s to wait for elevateKubeSystemPrivileges
	I1206 09:07:29.478722  249953 kubeadm.go:403] duration metric: took 23.249181397s to StartCluster
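
The burst of "kubectl get sa default" calls above is a readiness poll: kubeadm has finished, but the default ServiceAccount only appears once the controller-manager's service-account controller has run, so minikube retries about every 500ms until the command succeeds (the "wait for elevateKubeSystemPrivileges" metric). As a plain shell loop (a sketch; minikube does this in Go):

    # poll until the default ServiceAccount exists in the default namespace
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done
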
	I1206 09:07:29.478742  249953 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:29.478811  249953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:07:29.479779  249953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:29.480059  249953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:07:29.480060  249953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:07:29.480151  249953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:07:29.480265  249953 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-322324"
	I1206 09:07:29.480289  249953 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-322324"
	I1206 09:07:29.480301  249953 config.go:182] Loaded profile config "old-k8s-version-322324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:07:29.480320  249953 host.go:66] Checking if "old-k8s-version-322324" exists ...
	I1206 09:07:29.480322  249953 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-322324"
	I1206 09:07:29.480370  249953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-322324"
	I1206 09:07:29.480696  249953 cli_runner.go:164] Run: docker container inspect old-k8s-version-322324 --format={{.State.Status}}
	I1206 09:07:29.480827  249953 cli_runner.go:164] Run: docker container inspect old-k8s-version-322324 --format={{.State.Status}}
	I1206 09:07:29.481862  249953 out.go:179] * Verifying Kubernetes components...
	I1206 09:07:29.483374  249953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:07:29.508389  249953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:29.509016  249953 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-322324"
	I1206 09:07:29.509060  249953 host.go:66] Checking if "old-k8s-version-322324" exists ...
	I1206 09:07:29.509533  249953 cli_runner.go:164] Run: docker container inspect old-k8s-version-322324 --format={{.State.Status}}
	I1206 09:07:29.509548  249953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:07:29.509565  249953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:07:29.509613  249953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-322324
	I1206 09:07:29.539626  249953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:07:29.539726  249953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:07:29.539812  249953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-322324
	I1206 09:07:29.543514  249953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/old-k8s-version-322324/id_rsa Username:docker}
	I1206 09:07:29.566274  249953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/old-k8s-version-322324/id_rsa Username:docker}
	I1206 09:07:29.591608  249953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:07:29.632145  249953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:07:29.658455  249953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:07:29.680360  249953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:07:29.842612  249953 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
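
The sed pipeline at 09:07:29.591608 rewrites the coredns ConfigMap in place: it fetches the Corefile, inserts a hosts block mapping host.minikube.internal to the host gateway just before the forward directive, enables the log plugin, and pushes the result back with kubectl replace. The stanza that ends up in the Corefile is:

    hosts {
        192.168.76.1 host.minikube.internal
        fallthrough
    }
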
	I1206 09:07:29.843684  249953 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-322324" to be "Ready" ...
	I1206 09:07:30.144494  249953 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:07:29.358225  255989 out.go:252]   - Generating certificates and keys ...
	I1206 09:07:29.358352  255989 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:07:29.358479  255989 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:07:29.426850  255989 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:07:29.610107  255989 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:07:29.669469  255989 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:07:29.723858  255989 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:07:29.770042  255989 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:07:29.772393  255989 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-769733] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:07:29.924724  255989 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:07:29.925069  255989 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-769733] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:07:30.010258  255989 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:07:30.044426  255989 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:07:30.110551  255989 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:07:30.110856  255989 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:07:30.242376  255989 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:07:30.504759  255989 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:07:30.656935  255989 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:07:30.787172  255989 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:07:30.865647  255989 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:07:30.866371  255989 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:07:30.872632  255989 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:07:30.145558  249953 addons.go:530] duration metric: took 665.402168ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:07:30.347940  249953 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-322324" context rescaled to 1 replicas
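
kapi.go scales the coredns Deployment down to a single replica, which is enough for a one-node cluster. The change is made through the API client, but it is equivalent to:

    kubectl -n kube-system scale deployment coredns --replicas=1
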
	I1206 09:07:26.513039  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:26.513069  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:26.548609  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:26.548633  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:26.595523  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:26.595555  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:26.634833  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:26.634868  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:26.726954  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:26.726996  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:26.744662  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:26.744692  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:26.811253  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
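
With the API server refusing connections on localhost:8443, "describe nodes" cannot work, so the diagnostics fall back to sources that do not need the API: systemd unit logs, the container runtime, and the kernel ring buffer. The commands used above can be run interactively for the same purpose:

    # last 400 lines of the container runtime and kubelet unit logs
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    # every container known to the runtime, regardless of state
    sudo crictl ps -a
    # recent kernel warnings and errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
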
	I1206 09:07:29.312056  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:28.547118  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:28.547489  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:28.547545  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:28.547601  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:28.574655  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:28.574675  224160 cri.go:89] found id: ""
	I1206 09:07:28.574682  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:28.574729  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:28.578748  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:28.578813  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:28.606204  224160 cri.go:89] found id: ""
	I1206 09:07:28.606229  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.606240  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:28.606248  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:28.606300  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:28.633905  224160 cri.go:89] found id: ""
	I1206 09:07:28.633935  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.633945  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:28.633959  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:28.634030  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:28.661910  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:28.661932  224160 cri.go:89] found id: ""
	I1206 09:07:28.661941  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:28.662028  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:28.666516  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:28.666575  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:28.693859  224160 cri.go:89] found id: ""
	I1206 09:07:28.693886  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.693899  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:28.693907  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:28.693966  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:28.721458  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:28.721481  224160 cri.go:89] found id: ""
	I1206 09:07:28.721497  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:28.721560  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:28.725272  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:28.725350  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:28.752770  224160 cri.go:89] found id: ""
	I1206 09:07:28.752799  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.752809  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:28.752816  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:28.752875  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:28.780329  224160 cri.go:89] found id: ""
	I1206 09:07:28.780355  224160 logs.go:282] 0 containers: []
	W1206 09:07:28.780366  224160 logs.go:284] No container was found matching "storage-provisioner"
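
Each "listing CRI containers" line corresponds to one crictl query: minikube asks the runtime for container IDs filtered either by container name or by pod-namespace label, and an empty result (found id: "") means that component has no container yet. The two query shapes used throughout this log:

    # IDs of all containers (any state) whose name matches a component
    sudo crictl ps -a --quiet --name=kube-apiserver
    # IDs of all containers belonging to pods in the kube-system namespace
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
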
	I1206 09:07:28.780377  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:28.780429  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:28.838478  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:28.838504  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:28.869185  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:28.869214  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:28.962944  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:28.962983  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:28.979527  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:28.979551  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:29.039801  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:29.039820  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:29.039831  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:29.073859  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:29.073887  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:29.104962  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:29.105013  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:31.634755  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:31.635190  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:31.635240  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:31.635288  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:31.664881  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:31.664907  224160 cri.go:89] found id: ""
	I1206 09:07:31.664917  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:31.664975  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:31.669962  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:31.670043  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:31.700237  224160 cri.go:89] found id: ""
	I1206 09:07:31.700260  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.700271  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:31.700278  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:31.700344  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:31.734931  224160 cri.go:89] found id: ""
	I1206 09:07:31.734958  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.734968  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:31.734976  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:31.735050  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:31.768414  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:31.768439  224160 cri.go:89] found id: ""
	I1206 09:07:31.768448  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:31.768507  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:31.774023  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:31.774102  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:31.808546  224160 cri.go:89] found id: ""
	I1206 09:07:31.808576  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.808589  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:31.808597  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:31.808661  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:31.840967  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:31.841316  224160 cri.go:89] found id: ""
	I1206 09:07:31.841342  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:31.841415  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:31.846757  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:31.846821  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:31.877072  224160 cri.go:89] found id: ""
	I1206 09:07:31.877099  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.877110  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:31.877118  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:31.877175  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:31.905962  224160 cri.go:89] found id: ""
	I1206 09:07:31.906014  224160 logs.go:282] 0 containers: []
	W1206 09:07:31.906027  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:31.906038  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:31.906069  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:31.971232  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:31.971256  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:31.971273  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:32.004963  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:32.005026  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:32.033161  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:32.033188  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:32.060503  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:32.060529  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:32.113812  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:32.113850  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:32.144542  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:32.144571  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:30.874358  255989 out.go:252]   - Booting up control plane ...
	I1206 09:07:30.874487  255989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:07:30.874606  255989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:07:30.875686  255989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:07:30.893709  255989 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:07:30.893889  255989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:07:30.900923  255989 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:07:30.901253  255989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:07:30.901335  255989 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:07:31.012330  255989 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:07:31.012462  255989 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:07:31.514149  255989 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.87682ms
	I1206 09:07:31.517373  255989 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:07:31.517504  255989 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:07:31.517633  255989 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:07:31.517727  255989 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:07:32.022121  255989 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 504.568081ms
	I1206 09:07:33.390162  255989 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.872670055s
	I1206 09:07:35.520478  255989 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.003048064s
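
kubeadm's control-plane-check phase polls four endpoints until each reports healthy: the kubelet's local healthz, the controller-manager and scheduler on their localhost secure ports, and the API server's /livez on the node IP. The same probes can be reproduced by hand (endpoints taken from the log; -k because the local ports serve self-signed certificates):

    curl -sf  http://127.0.0.1:10248/healthz      # kubelet
    curl -skf https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -skf https://127.0.0.1:10259/livez       # kube-scheduler
    curl -skf https://192.168.94.2:8443/livez     # kube-apiserver
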
	I1206 09:07:35.537470  255989 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:07:35.548264  255989 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:07:35.556402  255989 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:07:35.556719  255989 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-769733 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:07:35.564904  255989 kubeadm.go:319] [bootstrap-token] Using token: 595w8g.4ay26dwior6u2ehq
	I1206 09:07:35.566977  255989 out.go:252]   - Configuring RBAC rules ...
	I1206 09:07:35.567130  255989 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:07:35.570103  255989 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:07:35.575169  255989 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:07:35.577548  255989 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:07:35.579866  255989 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:07:35.582193  255989 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	W1206 09:07:31.849195  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
	W1206 09:07:34.347852  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
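
node_ready.go keeps re-reading the Node object and retries while its Ready condition is False; a freshly bootstrapped node typically stays NotReady until the CNI plugin is installed and the kubelet flips the condition to True. A one-off check of the same condition from a shell:

    kubectl get node old-k8s-version-322324 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
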
	I1206 09:07:34.314394  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 09:07:34.314486  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:34.314552  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:34.354980  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:34.355028  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:34.355034  222653 cri.go:89] found id: ""
	I1206 09:07:34.355043  222653 logs.go:282] 2 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:34.355093  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.359478  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.363487  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:34.363554  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:34.399236  222653 cri.go:89] found id: ""
	I1206 09:07:34.399263  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.399272  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:34.399278  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:34.399323  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:34.435457  222653 cri.go:89] found id: ""
	I1206 09:07:34.435478  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.435484  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:34.435489  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:34.435543  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:34.473941  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:34.473967  222653 cri.go:89] found id: ""
	I1206 09:07:34.473978  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:34.474044  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.478215  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:34.478286  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:34.514284  222653 cri.go:89] found id: ""
	I1206 09:07:34.514307  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.514314  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:34.514319  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:34.514384  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:34.551124  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:34.551148  222653 cri.go:89] found id: ""
	I1206 09:07:34.551157  222653 logs.go:282] 1 containers: [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:34.551212  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.555723  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:34.555796  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:34.592494  222653 cri.go:89] found id: ""
	I1206 09:07:34.592522  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.592532  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:34.592539  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:34.592585  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:34.633451  222653 cri.go:89] found id: ""
	I1206 09:07:34.633475  222653 logs.go:282] 0 containers: []
	W1206 09:07:34.633486  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:34.633504  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:34.633518  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 09:07:35.927065  255989 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:07:36.340971  255989 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:07:36.926458  255989 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:07:36.927542  255989 kubeadm.go:319] 
	I1206 09:07:36.927624  255989 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:07:36.927635  255989 kubeadm.go:319] 
	I1206 09:07:36.927728  255989 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:07:36.927737  255989 kubeadm.go:319] 
	I1206 09:07:36.927780  255989 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:07:36.927843  255989 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:07:36.927889  255989 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:07:36.927895  255989 kubeadm.go:319] 
	I1206 09:07:36.927983  255989 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:07:36.928020  255989 kubeadm.go:319] 
	I1206 09:07:36.928103  255989 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:07:36.928112  255989 kubeadm.go:319] 
	I1206 09:07:36.928181  255989 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:07:36.928271  255989 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:07:36.928390  255989 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:07:36.928409  255989 kubeadm.go:319] 
	I1206 09:07:36.928532  255989 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:07:36.928643  255989 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:07:36.928652  255989 kubeadm.go:319] 
	I1206 09:07:36.928789  255989 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 595w8g.4ay26dwior6u2ehq \
	I1206 09:07:36.928953  255989 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:07:36.929010  255989 kubeadm.go:319] 	--control-plane 
	I1206 09:07:36.929019  255989 kubeadm.go:319] 
	I1206 09:07:36.929155  255989 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:07:36.929168  255989 kubeadm.go:319] 
	I1206 09:07:36.929290  255989 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 595w8g.4ay26dwior6u2ehq \
	I1206 09:07:36.929446  255989 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:07:36.931415  255989 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:07:36.931566  255989 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:07:36.931598  255989 cni.go:84] Creating CNI manager for ""
	I1206 09:07:36.931611  255989 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:07:36.935641  255989 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:07:32.232881  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:32.232919  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:34.749601  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:34.750065  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:07:34.750121  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:34.750180  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:34.779404  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:34.779422  224160 cri.go:89] found id: ""
	I1206 09:07:34.779433  224160 logs.go:282] 1 containers: [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:34.779478  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.783840  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:34.783899  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:34.811525  224160 cri.go:89] found id: ""
	I1206 09:07:34.811555  224160 logs.go:282] 0 containers: []
	W1206 09:07:34.811565  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:34.811574  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:34.811649  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:34.840881  224160 cri.go:89] found id: ""
	I1206 09:07:34.840919  224160 logs.go:282] 0 containers: []
	W1206 09:07:34.840931  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:34.840940  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:34.841035  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:34.868271  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:34.868290  224160 cri.go:89] found id: ""
	I1206 09:07:34.868300  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:34.868354  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.872625  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:34.872683  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:34.901139  224160 cri.go:89] found id: ""
	I1206 09:07:34.901166  224160 logs.go:282] 0 containers: []
	W1206 09:07:34.901175  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:34.901180  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:34.901226  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:34.927730  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:34.927748  224160 cri.go:89] found id: ""
	I1206 09:07:34.927755  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:34.927827  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:34.932708  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:34.932779  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:34.966267  224160 cri.go:89] found id: ""
	I1206 09:07:34.966296  224160 logs.go:282] 0 containers: []
	W1206 09:07:34.966306  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:34.966313  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:34.966372  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:35.002643  224160 cri.go:89] found id: ""
	I1206 09:07:35.002672  224160 logs.go:282] 0 containers: []
	W1206 09:07:35.002683  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:35.002694  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:35.002708  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:35.038650  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:35.038682  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:35.138295  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:35.138333  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:35.155748  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:35.155780  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:35.221461  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:35.221481  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:35.221496  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:35.258011  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:35.258044  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:35.290805  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:35.290850  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:35.322926  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:35.322950  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:36.936891  255989 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:07:36.941636  255989 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1206 09:07:36.941658  255989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:07:36.956343  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
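
With the docker driver and the crio runtime, minikube picks kindnet as the CNI: it first confirms that the standard portmap plugin (needed for hostPort support) exists under /opt/cni/bin, then copies the generated manifest to the node and applies it with the cluster's own kubectl binary. Condensed from the lines above:

    # the portmap CNI plugin must be present for hostPort mappings
    stat /opt/cni/bin/portmap
    # apply the CNI manifest that was scp'd to /var/tmp/minikube/cni.yaml
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply \
        --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
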
	I1206 09:07:37.182932  255989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:07:37.183022  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:37.183052  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-769733 minikube.k8s.io/updated_at=2025_12_06T09_07_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=no-preload-769733 minikube.k8s.io/primary=true
	I1206 09:07:37.195768  255989 ops.go:34] apiserver oom_adj: -16
	I1206 09:07:37.274765  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:37.774920  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:38.275113  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:38.775534  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:39.275186  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:39.775711  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:40.275114  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1206 09:07:36.846455  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
	W1206 09:07:38.846888  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
	I1206 09:07:40.775293  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:41.275747  255989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:07:41.346609  255989 kubeadm.go:1114] duration metric: took 4.16367274s to wait for elevateKubeSystemPrivileges
	I1206 09:07:41.346645  255989 kubeadm.go:403] duration metric: took 12.29311805s to StartCluster
	I1206 09:07:41.346667  255989 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:41.346753  255989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:07:41.348124  255989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:07:41.348337  255989 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:07:41.348365  255989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:07:41.348426  255989 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:07:41.348506  255989 config.go:182] Loaded profile config "no-preload-769733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:41.348525  255989 addons.go:70] Setting storage-provisioner=true in profile "no-preload-769733"
	I1206 09:07:41.348548  255989 addons.go:239] Setting addon storage-provisioner=true in "no-preload-769733"
	I1206 09:07:41.348560  255989 addons.go:70] Setting default-storageclass=true in profile "no-preload-769733"
	I1206 09:07:41.348582  255989 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-769733"
	I1206 09:07:41.348585  255989 host.go:66] Checking if "no-preload-769733" exists ...
	I1206 09:07:41.348920  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:41.349080  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:41.351151  255989 out.go:179] * Verifying Kubernetes components...
	I1206 09:07:41.352247  255989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:07:41.371018  255989 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:07:41.371704  255989 addons.go:239] Setting addon default-storageclass=true in "no-preload-769733"
	I1206 09:07:41.371739  255989 host.go:66] Checking if "no-preload-769733" exists ...
	I1206 09:07:41.372206  255989 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:07:41.372293  255989 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:07:41.372315  255989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:07:41.372368  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:41.399834  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:41.401887  255989 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:07:41.401909  255989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:07:41.401960  255989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:07:41.429582  255989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:07:41.446281  255989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:07:41.511869  255989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:07:41.535642  255989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:07:41.545648  255989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:07:41.631627  255989 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1206 09:07:41.632631  255989 node_ready.go:35] waiting up to 6m0s for node "no-preload-769733" to be "Ready" ...
	I1206 09:07:41.835464  255989 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:07:37.885172  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1206 09:07:41.346980  249953 node_ready.go:57] node "old-k8s-version-322324" has "Ready":"False" status (will retry)
	I1206 09:07:41.846494  249953 node_ready.go:49] node "old-k8s-version-322324" is "Ready"
	I1206 09:07:41.846523  249953 node_ready.go:38] duration metric: took 12.002814275s for node "old-k8s-version-322324" to be "Ready" ...
	I1206 09:07:41.846539  249953 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:07:41.846591  249953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:07:41.858774  249953 api_server.go:72] duration metric: took 12.378677713s to wait for apiserver process to appear ...
	I1206 09:07:41.858802  249953 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:07:41.858830  249953 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:07:41.863370  249953 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1206 09:07:41.864536  249953 api_server.go:141] control plane version: v1.28.0
	I1206 09:07:41.864565  249953 api_server.go:131] duration metric: took 5.75587ms to wait for apiserver health ...
	I1206 09:07:41.864576  249953 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:07:41.868797  249953 system_pods.go:59] 8 kube-system pods found
	I1206 09:07:41.868827  249953 system_pods.go:61] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:41.868832  249953 system_pods.go:61] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:41.868837  249953 system_pods.go:61] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:41.868841  249953 system_pods.go:61] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:41.868845  249953 system_pods.go:61] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:41.868848  249953 system_pods.go:61] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:41.868851  249953 system_pods.go:61] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:41.868856  249953 system_pods.go:61] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:41.868865  249953 system_pods.go:74] duration metric: took 4.282928ms to wait for pod list to return data ...
	I1206 09:07:41.868874  249953 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:07:41.870888  249953 default_sa.go:45] found service account: "default"
	I1206 09:07:41.870908  249953 default_sa.go:55] duration metric: took 2.026608ms for default service account to be created ...
	I1206 09:07:41.870915  249953 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:07:41.874429  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:41.874460  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:41.874468  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:41.874485  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:41.874494  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:41.874505  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:41.874514  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:41.874519  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:41.874529  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:41.874561  249953 retry.go:31] will retry after 192.117588ms: missing components: kube-dns
	I1206 09:07:42.073303  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:42.073353  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:42.073360  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:42.073368  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:42.073373  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:42.073378  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:42.073383  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:42.073395  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:42.073402  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:42.073418  249953 retry.go:31] will retry after 306.512117ms: missing components: kube-dns
	I1206 09:07:42.389397  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:42.389435  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:42.389451  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:42.389473  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:42.389484  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:42.389493  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:42.389502  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:42.389513  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:42.389536  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:42.389562  249953 retry.go:31] will retry after 418.251259ms: missing components: kube-dns
	I1206 09:07:42.812921  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:42.812954  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:42.812960  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:42.812965  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:42.812969  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:42.812974  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:42.812977  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:42.812980  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:42.812998  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:42.813017  249953 retry.go:31] will retry after 373.953455ms: missing components: kube-dns
	I1206 09:07:43.191920  249953 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:43.191957  249953 system_pods.go:89] "coredns-5dd5756b68-gf4kq" [349bf4f7-a7c8-45cb-a55f-cfad0698bfac] Running
	I1206 09:07:43.191965  249953 system_pods.go:89] "etcd-old-k8s-version-322324" [e37079bd-38fd-48a7-89f1-9f815d33ca6e] Running
	I1206 09:07:43.191971  249953 system_pods.go:89] "kindnet-fn4nn" [b3999369-84b8-4a7f-b999-5305a89ad2ef] Running
	I1206 09:07:43.191976  249953 system_pods.go:89] "kube-apiserver-old-k8s-version-322324" [35a4cc0b-e47e-45de-a3ca-28f427a145a9] Running
	I1206 09:07:43.191984  249953 system_pods.go:89] "kube-controller-manager-old-k8s-version-322324" [0b769784-9b65-4280-bde5-9065f93556e5] Running
	I1206 09:07:43.192021  249953 system_pods.go:89] "kube-proxy-pspsz" [6e52eb74-1c28-4573-b5be-93a2b28646f5] Running
	I1206 09:07:43.192030  249953 system_pods.go:89] "kube-scheduler-old-k8s-version-322324" [75756f20-d675-4e26-8dff-f241843c4d0a] Running
	I1206 09:07:43.192036  249953 system_pods.go:89] "storage-provisioner" [e6100832-c99a-456e-b2d0-359f940bfa8a] Running
	I1206 09:07:43.192045  249953 system_pods.go:126] duration metric: took 1.321124826s to wait for k8s-apps to be running ...
	I1206 09:07:43.192057  249953 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:07:43.192114  249953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:07:43.207751  249953 system_svc.go:56] duration metric: took 15.683735ms WaitForService to wait for kubelet
	I1206 09:07:43.207780  249953 kubeadm.go:587] duration metric: took 13.727689751s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:07:43.207800  249953 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:07:43.210927  249953 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:07:43.210959  249953 node_conditions.go:123] node cpu capacity is 8
	I1206 09:07:43.210979  249953 node_conditions.go:105] duration metric: took 3.172435ms to run NodePressure ...
	I1206 09:07:43.211017  249953 start.go:242] waiting for startup goroutines ...
	I1206 09:07:43.211032  249953 start.go:247] waiting for cluster config update ...
	I1206 09:07:43.211046  249953 start.go:256] writing updated cluster config ...
	I1206 09:07:43.211352  249953 ssh_runner.go:195] Run: rm -f paused
	I1206 09:07:43.215613  249953 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:07:43.220387  249953 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gf4kq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.225494  249953 pod_ready.go:94] pod "coredns-5dd5756b68-gf4kq" is "Ready"
	I1206 09:07:43.225517  249953 pod_ready.go:86] duration metric: took 5.101903ms for pod "coredns-5dd5756b68-gf4kq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.228616  249953 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.233130  249953 pod_ready.go:94] pod "etcd-old-k8s-version-322324" is "Ready"
	I1206 09:07:43.233156  249953 pod_ready.go:86] duration metric: took 4.515037ms for pod "etcd-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.236328  249953 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.240615  249953 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-322324" is "Ready"
	I1206 09:07:43.240638  249953 pod_ready.go:86] duration metric: took 4.285769ms for pod "kube-apiserver-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.243145  249953 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.619890  249953 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-322324" is "Ready"
	I1206 09:07:43.619916  249953 pod_ready.go:86] duration metric: took 376.751902ms for pod "kube-controller-manager-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:43.820969  249953 pod_ready.go:83] waiting for pod "kube-proxy-pspsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:44.220198  249953 pod_ready.go:94] pod "kube-proxy-pspsz" is "Ready"
	I1206 09:07:44.220227  249953 pod_ready.go:86] duration metric: took 399.219428ms for pod "kube-proxy-pspsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:44.420863  249953 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:44.820319  249953 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-322324" is "Ready"
	I1206 09:07:44.820354  249953 pod_ready.go:86] duration metric: took 399.451148ms for pod "kube-scheduler-old-k8s-version-322324" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:44.820370  249953 pod_ready.go:40] duration metric: took 1.604725918s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:07:44.866194  249953 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1206 09:07:44.867969  249953 out.go:203] 
	W1206 09:07:44.869218  249953 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1206 09:07:44.870240  249953 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1206 09:07:44.871529  249953 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-322324" cluster and "default" namespace by default
	I1206 09:07:41.836687  255989 addons.go:530] duration metric: took 488.265725ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:07:42.135862  255989 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-769733" context rescaled to 1 replicas
	W1206 09:07:43.635662  255989 node_ready.go:57] node "no-preload-769733" has "Ready":"False" status (will retry)
	I1206 09:07:44.704568  222653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.071024996s)
	W1206 09:07:44.704617  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1206 09:07:44.704633  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:44.704647  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:44.743422  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:44.743457  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:44.813286  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:44.813317  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:44.911624  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:44.911658  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:44.929150  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:07:44.929186  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:44.971310  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:44.971337  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:45.007885  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:45.007910  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:45.063548  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:45.063587  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:42.886157  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 09:07:42.886232  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:42.886296  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:42.916968  224160 cri.go:89] found id: "4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed"
	I1206 09:07:42.917021  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:42.917028  224160 cri.go:89] found id: ""
	I1206 09:07:42.917036  224160 logs.go:282] 2 containers: [4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:42.917183  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:42.921948  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:42.926436  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:42.926500  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:42.960276  224160 cri.go:89] found id: ""
	I1206 09:07:42.960306  224160 logs.go:282] 0 containers: []
	W1206 09:07:42.960317  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:42.960329  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:42.960391  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:43.003349  224160 cri.go:89] found id: ""
	I1206 09:07:43.003378  224160 logs.go:282] 0 containers: []
	W1206 09:07:43.003388  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:43.003395  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:43.003467  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:43.036071  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:43.036095  224160 cri.go:89] found id: ""
	I1206 09:07:43.036106  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:43.036169  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:43.040573  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:43.040643  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:43.072172  224160 cri.go:89] found id: ""
	I1206 09:07:43.072200  224160 logs.go:282] 0 containers: []
	W1206 09:07:43.072210  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:43.072217  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:43.072275  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:43.105694  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:43.105716  224160 cri.go:89] found id: ""
	I1206 09:07:43.105727  224160 logs.go:282] 1 containers: [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:43.105786  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:43.110341  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:43.110394  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:43.139980  224160 cri.go:89] found id: ""
	I1206 09:07:43.140020  224160 logs.go:282] 0 containers: []
	W1206 09:07:43.140031  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:43.140038  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:43.140098  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:43.168849  224160 cri.go:89] found id: ""
	I1206 09:07:43.168876  224160 logs.go:282] 0 containers: []
	W1206 09:07:43.168887  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:43.168905  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:43.168920  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:43.266073  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:43.266105  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:46.135558  255989 node_ready.go:57] node "no-preload-769733" has "Ready":"False" status (will retry)
	W1206 09:07:48.135635  255989 node_ready.go:57] node "no-preload-769733" has "Ready":"False" status (will retry)
	W1206 09:07:50.136042  255989 node_ready.go:57] node "no-preload-769733" has "Ready":"False" status (will retry)
	I1206 09:07:47.604853  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:47.605374  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:47.605427  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:47.605488  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:47.640247  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:47.640270  222653 cri.go:89] found id: "dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:47.640276  222653 cri.go:89] found id: ""
	I1206 09:07:47.640285  222653 logs.go:282] 2 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4]
	I1206 09:07:47.640343  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.644294  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.647782  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:47.647853  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:47.682228  222653 cri.go:89] found id: ""
	I1206 09:07:47.682249  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.682255  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:47.682263  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:47.682306  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:47.716449  222653 cri.go:89] found id: ""
	I1206 09:07:47.716473  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.716482  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:47.716489  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:47.716548  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:47.751665  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:47.751688  222653 cri.go:89] found id: ""
	I1206 09:07:47.751696  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:47.751743  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.755458  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:47.755509  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:47.790335  222653 cri.go:89] found id: ""
	I1206 09:07:47.790359  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.790367  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:47.790373  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:47.790422  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:47.824861  222653 cri.go:89] found id: "dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:47.824883  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:47.824886  222653 cri.go:89] found id: ""
	I1206 09:07:47.824893  222653 logs.go:282] 2 containers: [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:47.824936  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.828796  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:47.832275  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:47.832322  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:47.867447  222653 cri.go:89] found id: ""
	I1206 09:07:47.867468  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.867475  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:47.867481  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:47.867557  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:47.904140  222653 cri.go:89] found id: ""
	I1206 09:07:47.904167  222653 logs.go:282] 0 containers: []
	W1206 09:07:47.904177  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:47.904197  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:47.904211  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:47.920401  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:47.920426  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:47.988341  222653 logs.go:123] Gathering logs for kube-controller-manager [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9] ...
	I1206 09:07:47.988373  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:48.023893  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:48.023916  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:48.074221  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:48.074252  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:48.169186  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:48.169215  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:48.229200  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:48.229219  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:07:48.229232  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:48.268047  222653 logs.go:123] Gathering logs for kube-apiserver [dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4] ...
	I1206 09:07:48.268078  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadde436bbd59877fbca80f96867d2bd87f7eda2f9125800be881906bb02e9b4"
	I1206 09:07:48.306185  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:48.306215  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:48.340880  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:48.340905  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:50.881094  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:50.881493  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:50.881540  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:50.881588  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:50.916366  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:50.916386  222653 cri.go:89] found id: ""
	I1206 09:07:50.916393  222653 logs.go:282] 1 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400]
	I1206 09:07:50.916452  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:50.920255  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:50.920318  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:50.954222  222653 cri.go:89] found id: ""
	I1206 09:07:50.954242  222653 logs.go:282] 0 containers: []
	W1206 09:07:50.954255  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:50.954261  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:50.954313  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:50.989922  222653 cri.go:89] found id: ""
	I1206 09:07:50.989950  222653 logs.go:282] 0 containers: []
	W1206 09:07:50.989957  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:50.989979  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:50.990052  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:51.024154  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:51.024174  222653 cri.go:89] found id: ""
	I1206 09:07:51.024183  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:51.024239  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:51.027928  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:51.027983  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:51.064518  222653 cri.go:89] found id: ""
	I1206 09:07:51.064551  222653 logs.go:282] 0 containers: []
	W1206 09:07:51.064563  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:51.064572  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:51.064630  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:51.099738  222653 cri.go:89] found id: "dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:51.099761  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:51.099767  222653 cri.go:89] found id: ""
	I1206 09:07:51.099776  222653 logs.go:282] 2 containers: [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:51.099828  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:51.103758  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:51.107314  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:51.107379  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:51.142058  222653 cri.go:89] found id: ""
	I1206 09:07:51.142082  222653 logs.go:282] 0 containers: []
	W1206 09:07:51.142092  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:51.142100  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:51.142159  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:51.176980  222653 cri.go:89] found id: ""
	I1206 09:07:51.177051  222653 logs.go:282] 0 containers: []
	W1206 09:07:51.177059  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:51.177073  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:51.177088  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:51.235708  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:51.235726  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:51.235742  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:51.305544  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:51.305573  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:51.340354  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:51.340390  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:51.377578  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:07:51.377603  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:51.414929  222653 logs.go:123] Gathering logs for kube-controller-manager [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9] ...
	I1206 09:07:51.414953  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:51.449327  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:51.449352  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1206 09:07:52.136483  255989 node_ready.go:57] node "no-preload-769733" has "Ready":"False" status (will retry)
	I1206 09:07:54.137261  255989 node_ready.go:49] node "no-preload-769733" is "Ready"
	I1206 09:07:54.137294  255989 node_ready.go:38] duration metric: took 12.504640022s for node "no-preload-769733" to be "Ready" ...
	I1206 09:07:54.137312  255989 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:07:54.137367  255989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:07:54.156503  255989 api_server.go:72] duration metric: took 12.80812862s to wait for apiserver process to appear ...
	I1206 09:07:54.156526  255989 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:07:54.156548  255989 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:07:54.162771  255989 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1206 09:07:54.164071  255989 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:07:54.164093  255989 api_server.go:131] duration metric: took 7.560743ms to wait for apiserver health ...
	I1206 09:07:54.164102  255989 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:07:54.167970  255989 system_pods.go:59] 8 kube-system pods found
	I1206 09:07:54.168031  255989 system_pods.go:61] "coredns-7d764666f9-jllj2" [60e2b794-62a6-4c6e-b48b-4d95862a11d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:54.168041  255989 system_pods.go:61] "etcd-no-preload-769733" [6c79ece9-a953-4216-ab23-8b5d7848170b] Running
	I1206 09:07:54.168049  255989 system_pods.go:61] "kindnet-7m8h6" [858f0c55-78ac-45ce-9824-f34d3de8cdc6] Running
	I1206 09:07:54.168059  255989 system_pods.go:61] "kube-apiserver-no-preload-769733" [d9c93321-d4d2-4e34-80a7-f713945c8c2b] Running
	I1206 09:07:54.168070  255989 system_pods.go:61] "kube-controller-manager-no-preload-769733" [0219b9f6-edd3-4433-901b-1ee20faf491f] Running
	I1206 09:07:54.168075  255989 system_pods.go:61] "kube-proxy-5jsq2" [ba99eecc-e4e4-4861-9a7c-ab51b62684bf] Running
	I1206 09:07:54.168084  255989 system_pods.go:61] "kube-scheduler-no-preload-769733" [0f0afff4-ba2f-4b24-8357-481a88c853b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:07:54.168100  255989 system_pods.go:61] "storage-provisioner" [0a6b38d2-2d16-482f-a0eb-c386f48ac1ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:54.168112  255989 system_pods.go:74] duration metric: took 4.003445ms to wait for pod list to return data ...
	I1206 09:07:54.168125  255989 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:07:54.170321  255989 default_sa.go:45] found service account: "default"
	I1206 09:07:54.170357  255989 default_sa.go:55] duration metric: took 2.221895ms for default service account to be created ...
	I1206 09:07:54.170368  255989 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:07:54.173346  255989 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:54.173379  255989 system_pods.go:89] "coredns-7d764666f9-jllj2" [60e2b794-62a6-4c6e-b48b-4d95862a11d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:07:54.173386  255989 system_pods.go:89] "etcd-no-preload-769733" [6c79ece9-a953-4216-ab23-8b5d7848170b] Running
	I1206 09:07:54.173394  255989 system_pods.go:89] "kindnet-7m8h6" [858f0c55-78ac-45ce-9824-f34d3de8cdc6] Running
	I1206 09:07:54.173399  255989 system_pods.go:89] "kube-apiserver-no-preload-769733" [d9c93321-d4d2-4e34-80a7-f713945c8c2b] Running
	I1206 09:07:54.173404  255989 system_pods.go:89] "kube-controller-manager-no-preload-769733" [0219b9f6-edd3-4433-901b-1ee20faf491f] Running
	I1206 09:07:54.173416  255989 system_pods.go:89] "kube-proxy-5jsq2" [ba99eecc-e4e4-4861-9a7c-ab51b62684bf] Running
	I1206 09:07:54.173424  255989 system_pods.go:89] "kube-scheduler-no-preload-769733" [0f0afff4-ba2f-4b24-8357-481a88c853b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:07:54.173430  255989 system_pods.go:89] "storage-provisioner" [0a6b38d2-2d16-482f-a0eb-c386f48ac1ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:07:54.173469  255989 retry.go:31] will retry after 294.684532ms: missing components: kube-dns
	I1206 09:07:54.473414  255989 system_pods.go:86] 8 kube-system pods found
	I1206 09:07:54.473449  255989 system_pods.go:89] "coredns-7d764666f9-jllj2" [60e2b794-62a6-4c6e-b48b-4d95862a11d4] Running
	I1206 09:07:54.473456  255989 system_pods.go:89] "etcd-no-preload-769733" [6c79ece9-a953-4216-ab23-8b5d7848170b] Running
	I1206 09:07:54.473463  255989 system_pods.go:89] "kindnet-7m8h6" [858f0c55-78ac-45ce-9824-f34d3de8cdc6] Running
	I1206 09:07:54.473468  255989 system_pods.go:89] "kube-apiserver-no-preload-769733" [d9c93321-d4d2-4e34-80a7-f713945c8c2b] Running
	I1206 09:07:54.473475  255989 system_pods.go:89] "kube-controller-manager-no-preload-769733" [0219b9f6-edd3-4433-901b-1ee20faf491f] Running
	I1206 09:07:54.473480  255989 system_pods.go:89] "kube-proxy-5jsq2" [ba99eecc-e4e4-4861-9a7c-ab51b62684bf] Running
	I1206 09:07:54.473494  255989 system_pods.go:89] "kube-scheduler-no-preload-769733" [0f0afff4-ba2f-4b24-8357-481a88c853b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:07:54.473500  255989 system_pods.go:89] "storage-provisioner" [0a6b38d2-2d16-482f-a0eb-c386f48ac1ca] Running
	I1206 09:07:54.473511  255989 system_pods.go:126] duration metric: took 303.130379ms to wait for k8s-apps to be running ...
	I1206 09:07:54.473521  255989 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:07:54.473573  255989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:07:54.489750  255989 system_svc.go:56] duration metric: took 16.220325ms WaitForService to wait for kubelet
	I1206 09:07:54.489779  255989 kubeadm.go:587] duration metric: took 13.141408614s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:07:54.489801  255989 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:07:54.492803  255989 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:07:54.492831  255989 node_conditions.go:123] node cpu capacity is 8
	I1206 09:07:54.492849  255989 node_conditions.go:105] duration metric: took 3.042508ms to run NodePressure ...
	I1206 09:07:54.492867  255989 start.go:242] waiting for startup goroutines ...
	I1206 09:07:54.492880  255989 start.go:247] waiting for cluster config update ...
	I1206 09:07:54.492898  255989 start.go:256] writing updated cluster config ...
	I1206 09:07:54.493185  255989 ssh_runner.go:195] Run: rm -f paused
	I1206 09:07:54.497581  255989 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:07:54.501112  255989 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jllj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:54.505657  255989 pod_ready.go:94] pod "coredns-7d764666f9-jllj2" is "Ready"
	I1206 09:07:54.505684  255989 pod_ready.go:86] duration metric: took 4.551323ms for pod "coredns-7d764666f9-jllj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:54.507828  255989 pod_ready.go:83] waiting for pod "etcd-no-preload-769733" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:54.512087  255989 pod_ready.go:94] pod "etcd-no-preload-769733" is "Ready"
	I1206 09:07:54.512112  255989 pod_ready.go:86] duration metric: took 4.262252ms for pod "etcd-no-preload-769733" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:54.513937  255989 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-769733" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:54.517900  255989 pod_ready.go:94] pod "kube-apiserver-no-preload-769733" is "Ready"
	I1206 09:07:54.517920  255989 pod_ready.go:86] duration metric: took 3.962641ms for pod "kube-apiserver-no-preload-769733" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:54.519754  255989 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-769733" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:54.901710  255989 pod_ready.go:94] pod "kube-controller-manager-no-preload-769733" is "Ready"
	I1206 09:07:54.901736  255989 pod_ready.go:86] duration metric: took 381.960224ms for pod "kube-controller-manager-no-preload-769733" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:55.101825  255989 pod_ready.go:83] waiting for pod "kube-proxy-5jsq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:55.502273  255989 pod_ready.go:94] pod "kube-proxy-5jsq2" is "Ready"
	I1206 09:07:55.502295  255989 pod_ready.go:86] duration metric: took 400.449421ms for pod "kube-proxy-5jsq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:55.703101  255989 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-769733" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:51.497197  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:51.497229  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:51.589546  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:51.589584  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:54.107058  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:54.107485  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:54.107540  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:54.107598  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:54.156415  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:54.156452  222653 cri.go:89] found id: ""
	I1206 09:07:54.156462  222653 logs.go:282] 1 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400]
	I1206 09:07:54.156544  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:54.161642  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:54.161710  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:54.199535  222653 cri.go:89] found id: ""
	I1206 09:07:54.199564  222653 logs.go:282] 0 containers: []
	W1206 09:07:54.199573  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:54.199581  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:54.199647  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:54.249552  222653 cri.go:89] found id: ""
	I1206 09:07:54.249591  222653 logs.go:282] 0 containers: []
	W1206 09:07:54.249602  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:54.249610  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:54.249677  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:54.287643  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:54.287665  222653 cri.go:89] found id: ""
	I1206 09:07:54.287676  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:54.287721  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:54.291704  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:54.291760  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:54.328875  222653 cri.go:89] found id: ""
	I1206 09:07:54.328895  222653 logs.go:282] 0 containers: []
	W1206 09:07:54.328901  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:54.328907  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:54.328959  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:54.367247  222653 cri.go:89] found id: "dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:54.367267  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:54.367272  222653 cri.go:89] found id: ""
	I1206 09:07:54.367279  222653 logs.go:282] 2 containers: [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:54.367320  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:54.371065  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:54.375300  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:54.375351  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:54.413123  222653 cri.go:89] found id: ""
	I1206 09:07:54.413145  222653 logs.go:282] 0 containers: []
	W1206 09:07:54.413153  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:54.413159  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:54.413215  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:54.449725  222653 cri.go:89] found id: ""
	I1206 09:07:54.449751  222653 logs.go:282] 0 containers: []
	W1206 09:07:54.449761  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:54.449778  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:07:54.449794  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:54.492175  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:54.492205  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:54.574882  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:54.574912  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:54.630597  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:54.630622  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:54.671919  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:54.671948  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:54.773060  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:54.773091  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:54.789558  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:54.789585  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:54.855625  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:54.855645  222653 logs.go:123] Gathering logs for kube-controller-manager [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9] ...
	I1206 09:07:54.855657  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:54.892529  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:54.892554  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:56.502314  255989 pod_ready.go:94] pod "kube-scheduler-no-preload-769733" is "Ready"
	I1206 09:07:56.502340  255989 pod_ready.go:86] duration metric: took 799.210803ms for pod "kube-scheduler-no-preload-769733" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:07:56.502354  255989 pod_ready.go:40] duration metric: took 2.004743669s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:07:56.546572  255989 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:07:56.548411  255989 out.go:179] * Done! kubectl is now configured to use "no-preload-769733" cluster and "default" namespace by default
	I1206 09:07:53.324060  224160 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.057933012s)
	W1206 09:07:53.324113  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1206 09:07:53.324125  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:53.324140  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:53.356981  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:53.357018  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:53.386228  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:53.386260  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:53.415837  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:53.415864  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:53.479953  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:53.479981  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:53.494313  224160 logs.go:123] Gathering logs for kube-apiserver [4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed] ...
	I1206 09:07:53.494382  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed"
	I1206 09:07:53.528286  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:53.528315  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:56.065384  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:07:57.434046  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:07:57.434424  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:07:57.434475  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:57.434514  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:57.471425  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:57.471448  222653 cri.go:89] found id: ""
	I1206 09:07:57.471459  222653 logs.go:282] 1 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400]
	I1206 09:07:57.471513  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:57.475414  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:57.475480  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:57.513495  222653 cri.go:89] found id: ""
	I1206 09:07:57.513516  222653 logs.go:282] 0 containers: []
	W1206 09:07:57.513526  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:07:57.513534  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:57.513590  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:57.552221  222653 cri.go:89] found id: ""
	I1206 09:07:57.552245  222653 logs.go:282] 0 containers: []
	W1206 09:07:57.552254  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:07:57.552260  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:57.552316  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:57.592366  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:57.592399  222653 cri.go:89] found id: ""
	I1206 09:07:57.592409  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:07:57.592466  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:57.596281  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:57.596361  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:57.634550  222653 cri.go:89] found id: ""
	I1206 09:07:57.634572  222653 logs.go:282] 0 containers: []
	W1206 09:07:57.634580  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:57.634586  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:57.634629  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:57.672883  222653 cri.go:89] found id: "dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:57.672906  222653 cri.go:89] found id: "68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:07:57.672912  222653 cri.go:89] found id: ""
	I1206 09:07:57.672919  222653 logs.go:282] 2 containers: [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda]
	I1206 09:07:57.672978  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:57.677375  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:07:57.680848  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:57.680903  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:57.717170  222653 cri.go:89] found id: ""
	I1206 09:07:57.717195  222653 logs.go:282] 0 containers: []
	W1206 09:07:57.717204  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:57.717212  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:57.717270  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:57.753470  222653 cri.go:89] found id: ""
	I1206 09:07:57.753496  222653 logs.go:282] 0 containers: []
	W1206 09:07:57.753508  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:57.753522  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:57.753534  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:57.851120  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:57.851151  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:57.867952  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:07:57.867977  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:07:57.908424  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:07:57.908452  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:07:57.982785  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:57.982815  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:58.034090  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:07:58.034121  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:58.073310  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:58.073342  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:58.133438  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:58.133464  222653 logs.go:123] Gathering logs for kube-controller-manager [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9] ...
	I1206 09:07:58.133478  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:58.169469  222653 logs.go:123] Gathering logs for kube-controller-manager [68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda] ...
	I1206 09:07:58.169500  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68f946be1ee6e8ee3a12302ae251551f4693afa8e10c395172d7e644db329cda"
	I1206 09:08:00.706062  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:08:00.706525  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:08:00.706577  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:08:00.706626  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:08:00.747287  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:08:00.747322  222653 cri.go:89] found id: ""
	I1206 09:08:00.747332  222653 logs.go:282] 1 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400]
	I1206 09:08:00.747389  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:08:00.751270  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:08:00.751340  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:08:00.791207  222653 cri.go:89] found id: ""
	I1206 09:08:00.791231  222653 logs.go:282] 0 containers: []
	W1206 09:08:00.791240  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:08:00.791248  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:08:00.791304  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:08:00.826442  222653 cri.go:89] found id: ""
	I1206 09:08:00.826469  222653 logs.go:282] 0 containers: []
	W1206 09:08:00.826477  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:08:00.826488  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:08:00.826536  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:08:00.861433  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:08:00.861454  222653 cri.go:89] found id: ""
	I1206 09:08:00.861461  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:08:00.861535  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:08:00.865197  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:08:00.865253  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:08:00.902403  222653 cri.go:89] found id: ""
	I1206 09:08:00.902426  222653 logs.go:282] 0 containers: []
	W1206 09:08:00.902435  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:08:00.902443  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:08:00.902495  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:08:00.946950  222653 cri.go:89] found id: "dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:08:00.946978  222653 cri.go:89] found id: ""
	I1206 09:08:00.947016  222653 logs.go:282] 1 containers: [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9]
	I1206 09:08:00.947078  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:08:00.951269  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:08:00.951318  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:08:00.996665  222653 cri.go:89] found id: ""
	I1206 09:08:00.996724  222653 logs.go:282] 0 containers: []
	W1206 09:08:00.996734  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:08:00.996743  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:08:00.996806  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:08:01.037261  222653 cri.go:89] found id: ""
	I1206 09:08:01.037288  222653 logs.go:282] 0 containers: []
	W1206 09:08:01.037299  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:08:01.037310  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:08:01.037324  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:08:01.094799  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:08:01.094830  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:08:01.135249  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:08:01.135273  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:08:01.228612  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:08:01.228645  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:08:01.243894  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:08:01.243918  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:08:01.301757  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:08:01.301775  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:08:01.301790  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:08:01.338774  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:08:01.338799  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:08:01.408122  222653 logs.go:123] Gathering logs for kube-controller-manager [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9] ...
	I1206 09:08:01.408158  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:07:57.260462  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:44912->192.168.85.2:8443: read: connection reset by peer
	I1206 09:07:57.260532  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:07:57.260592  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:07:57.287454  224160 cri.go:89] found id: "4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed"
	I1206 09:07:57.287474  224160 cri.go:89] found id: "aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	I1206 09:07:57.287477  224160 cri.go:89] found id: ""
	I1206 09:07:57.287486  224160 logs.go:282] 2 containers: [4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]
	I1206 09:07:57.287541  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:57.291958  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:57.296577  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:07:57.296629  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:07:57.323686  224160 cri.go:89] found id: ""
	I1206 09:07:57.323714  224160 logs.go:282] 0 containers: []
	W1206 09:07:57.323726  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:07:57.323733  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:07:57.323784  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:07:57.349543  224160 cri.go:89] found id: ""
	I1206 09:07:57.349570  224160 logs.go:282] 0 containers: []
	W1206 09:07:57.349583  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:07:57.349591  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:07:57.349663  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:07:57.374735  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:07:57.374760  224160 cri.go:89] found id: ""
	I1206 09:07:57.374770  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:07:57.374828  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:57.378683  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:07:57.378745  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:07:57.404645  224160 cri.go:89] found id: ""
	I1206 09:07:57.404667  224160 logs.go:282] 0 containers: []
	W1206 09:07:57.404677  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:07:57.404685  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:07:57.404744  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:07:57.430821  224160 cri.go:89] found id: "8563a4c24b691995dd37f3c46b75b0b10d8280cabbb5d4e3fb3b023e65349f1c"
	I1206 09:07:57.430842  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:57.430847  224160 cri.go:89] found id: ""
	I1206 09:07:57.430856  224160 logs.go:282] 2 containers: [8563a4c24b691995dd37f3c46b75b0b10d8280cabbb5d4e3fb3b023e65349f1c 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:07:57.430903  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:57.434949  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:07:57.438729  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:07:57.438773  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:07:57.464772  224160 cri.go:89] found id: ""
	I1206 09:07:57.464802  224160 logs.go:282] 0 containers: []
	W1206 09:07:57.464813  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:07:57.464822  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:07:57.464876  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:07:57.492323  224160 cri.go:89] found id: ""
	I1206 09:07:57.492346  224160 logs.go:282] 0 containers: []
	W1206 09:07:57.492354  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:07:57.492365  224160 logs.go:123] Gathering logs for kube-controller-manager [8563a4c24b691995dd37f3c46b75b0b10d8280cabbb5d4e3fb3b023e65349f1c] ...
	I1206 09:07:57.492375  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8563a4c24b691995dd37f3c46b75b0b10d8280cabbb5d4e3fb3b023e65349f1c"
	I1206 09:07:57.520684  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:07:57.520711  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:07:57.553897  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:07:57.553919  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:07:57.618226  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:07:57.618250  224160 logs.go:123] Gathering logs for kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b] ...
	I1206 09:07:57.618266  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	W1206 09:07:57.646180  224160 logs.go:130] failed kube-apiserver [aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:07:57.644005    5790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b\": container with ID starting with aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b not found: ID does not exist" containerID="aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	time="2025-12-06T09:07:57Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b\": container with ID starting with aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b not found: ID does not exist"
	 output: 
	** stderr ** 
	E1206 09:07:57.644005    5790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b\": container with ID starting with aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b not found: ID does not exist" containerID="aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b"
	time="2025-12-06T09:07:57Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b\": container with ID starting with aa584555e0f066b5e9e5bf9427e7cf541e5a25c37b1420c10103b9afbb57731b not found: ID does not exist"
	
	** /stderr **
	I1206 09:07:57.646234  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:07:57.646251  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:07:57.673384  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:07:57.673409  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:07:57.733595  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:07:57.733633  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:07:57.825458  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:07:57.825497  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:07:57.839915  224160 logs.go:123] Gathering logs for kube-apiserver [4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed] ...
	I1206 09:07:57.839942  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed"
	I1206 09:07:57.870340  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:07:57.870363  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:08:00.398044  224160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:08:00.398465  224160 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1206 09:08:00.398514  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:08:00.398560  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:08:00.424294  224160 cri.go:89] found id: "4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed"
	I1206 09:08:00.424318  224160 cri.go:89] found id: ""
	I1206 09:08:00.424328  224160 logs.go:282] 1 containers: [4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed]
	I1206 09:08:00.424396  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:08:00.428437  224160 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:08:00.428504  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:08:00.455373  224160 cri.go:89] found id: ""
	I1206 09:08:00.455411  224160 logs.go:282] 0 containers: []
	W1206 09:08:00.455422  224160 logs.go:284] No container was found matching "etcd"
	I1206 09:08:00.455430  224160 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:08:00.455488  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:08:00.481800  224160 cri.go:89] found id: ""
	I1206 09:08:00.481825  224160 logs.go:282] 0 containers: []
	W1206 09:08:00.481836  224160 logs.go:284] No container was found matching "coredns"
	I1206 09:08:00.481843  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:08:00.481901  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:08:00.509257  224160 cri.go:89] found id: "ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:08:00.509278  224160 cri.go:89] found id: ""
	I1206 09:08:00.509289  224160 logs.go:282] 1 containers: [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b]
	I1206 09:08:00.509346  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:08:00.513239  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:08:00.513304  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:08:00.539074  224160 cri.go:89] found id: ""
	I1206 09:08:00.539108  224160 logs.go:282] 0 containers: []
	W1206 09:08:00.539117  224160 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:08:00.539126  224160 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:08:00.539201  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:08:00.565090  224160 cri.go:89] found id: "8563a4c24b691995dd37f3c46b75b0b10d8280cabbb5d4e3fb3b023e65349f1c"
	I1206 09:08:00.565116  224160 cri.go:89] found id: "7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:08:00.565122  224160 cri.go:89] found id: ""
	I1206 09:08:00.565129  224160 logs.go:282] 2 containers: [8563a4c24b691995dd37f3c46b75b0b10d8280cabbb5d4e3fb3b023e65349f1c 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823]
	I1206 09:08:00.565177  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:08:00.569162  224160 ssh_runner.go:195] Run: which crictl
	I1206 09:08:00.572841  224160 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:08:00.572894  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:08:00.599292  224160 cri.go:89] found id: ""
	I1206 09:08:00.599321  224160 logs.go:282] 0 containers: []
	W1206 09:08:00.599331  224160 logs.go:284] No container was found matching "kindnet"
	I1206 09:08:00.599339  224160 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:08:00.599399  224160 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:08:00.625646  224160 cri.go:89] found id: ""
	I1206 09:08:00.625668  224160 logs.go:282] 0 containers: []
	W1206 09:08:00.625676  224160 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:08:00.625693  224160 logs.go:123] Gathering logs for kube-apiserver [4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed] ...
	I1206 09:08:00.625708  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e409c361dd16f897d0d4292ed52d39935b17e03e10f9da6ae3aaf6f271c11ed"
	I1206 09:08:00.656023  224160 logs.go:123] Gathering logs for kube-scheduler [ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b] ...
	I1206 09:08:00.656048  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ea9120a6c5274e7da760674f23acc58b57203ed8400e991c65251f99b3e2973b"
	I1206 09:08:00.683333  224160 logs.go:123] Gathering logs for kube-controller-manager [8563a4c24b691995dd37f3c46b75b0b10d8280cabbb5d4e3fb3b023e65349f1c] ...
	I1206 09:08:00.683356  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8563a4c24b691995dd37f3c46b75b0b10d8280cabbb5d4e3fb3b023e65349f1c"
	I1206 09:08:00.708945  224160 logs.go:123] Gathering logs for kube-controller-manager [7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823] ...
	I1206 09:08:00.708976  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7e04f89ad66056622fb9fd799b142ce49e4a1a7fbff01f24a7f1167ca2f1c823"
	I1206 09:08:00.736700  224160 logs.go:123] Gathering logs for container status ...
	I1206 09:08:00.736737  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:08:00.769589  224160 logs.go:123] Gathering logs for kubelet ...
	I1206 09:08:00.769619  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:08:00.869876  224160 logs.go:123] Gathering logs for dmesg ...
	I1206 09:08:00.869900  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:08:00.884372  224160 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:08:00.884401  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:08:00.950457  224160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:08:00.950485  224160 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:08:00.950499  224160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:08:03.945040  222653 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:08:03.945375  222653 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1206 09:08:03.945434  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:08:03.945491  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:08:03.981209  222653 cri.go:89] found id: "6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:08:03.981227  222653 cri.go:89] found id: ""
	I1206 09:08:03.981235  222653 logs.go:282] 1 containers: [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400]
	I1206 09:08:03.981290  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:08:03.985127  222653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 09:08:03.985191  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:08:04.020518  222653 cri.go:89] found id: ""
	I1206 09:08:04.020538  222653 logs.go:282] 0 containers: []
	W1206 09:08:04.020546  222653 logs.go:284] No container was found matching "etcd"
	I1206 09:08:04.020551  222653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 09:08:04.020604  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:08:04.058201  222653 cri.go:89] found id: ""
	I1206 09:08:04.058226  222653 logs.go:282] 0 containers: []
	W1206 09:08:04.058234  222653 logs.go:284] No container was found matching "coredns"
	I1206 09:08:04.058242  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:08:04.058298  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:08:04.094345  222653 cri.go:89] found id: "84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:08:04.094370  222653 cri.go:89] found id: ""
	I1206 09:08:04.094379  222653 logs.go:282] 1 containers: [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0]
	I1206 09:08:04.094432  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:08:04.098373  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:08:04.098439  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:08:04.133471  222653 cri.go:89] found id: ""
	I1206 09:08:04.133497  222653 logs.go:282] 0 containers: []
	W1206 09:08:04.133506  222653 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:08:04.133512  222653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:08:04.133572  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:08:04.168168  222653 cri.go:89] found id: "dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:08:04.168191  222653 cri.go:89] found id: ""
	I1206 09:08:04.168201  222653 logs.go:282] 1 containers: [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9]
	I1206 09:08:04.168258  222653 ssh_runner.go:195] Run: which crictl
	I1206 09:08:04.172440  222653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 09:08:04.172522  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:08:04.207056  222653 cri.go:89] found id: ""
	I1206 09:08:04.207077  222653 logs.go:282] 0 containers: []
	W1206 09:08:04.207084  222653 logs.go:284] No container was found matching "kindnet"
	I1206 09:08:04.207089  222653 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:08:04.207157  222653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:08:04.242249  222653 cri.go:89] found id: ""
	I1206 09:08:04.242276  222653 logs.go:282] 0 containers: []
	W1206 09:08:04.242287  222653 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:08:04.242298  222653 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:08:04.242312  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:08:04.301096  222653 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:08:04.301116  222653 logs.go:123] Gathering logs for kube-apiserver [6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400] ...
	I1206 09:08:04.301132  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d7fbbe5bd90d53bf0d8285ee489045935ce490d82cb464af8ef2d3a1d1400"
	I1206 09:08:04.339214  222653 logs.go:123] Gathering logs for kube-scheduler [84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0] ...
	I1206 09:08:04.339245  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ccded3ab086ff7516b407d934465818e98136a9e47ac07905e13c4d82f0aa0"
	I1206 09:08:04.409814  222653 logs.go:123] Gathering logs for kube-controller-manager [dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9] ...
	I1206 09:08:04.409849  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dff2ebc40307c8da156b06d82ce2a6b6ede16a8bd5e3ac391d68504e6b98f6a9"
	I1206 09:08:04.444400  222653 logs.go:123] Gathering logs for CRI-O ...
	I1206 09:08:04.444425  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 09:08:04.490436  222653 logs.go:123] Gathering logs for container status ...
	I1206 09:08:04.490471  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:08:04.528497  222653 logs.go:123] Gathering logs for kubelet ...
	I1206 09:08:04.528525  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:08:04.627215  222653 logs.go:123] Gathering logs for dmesg ...
	I1206 09:08:04.627247  222653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
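	The log-gathering loop traced above can be reproduced by hand when debugging a node in this state. The sketch below is illustrative only: it assumes shell access to the node (for example via `minikube ssh` into the affected profile), reuses the exact commands recorded in the trace, and uses `<container-id>` as a placeholder for an ID returned by crictl; the final curl is merely a manual stand-in for minikube's internal GET of the apiserver healthz endpoint.
	    # list kube-apiserver containers, as the log gatherer does
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # tail the last 400 lines of one container (substitute a real ID)
	    sudo crictl logs --tail 400 <container-id>
	    # runtime and kubelet journals, plus recent kernel warnings
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # the "describe nodes" step, using the kubeconfig bundled on the node
	    sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    # manual equivalent of the healthz probe that keeps failing with "connection refused"
	    curl -k https://192.168.103.2:8443/healthz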
	
	
	==> CRI-O <==
	Dec 06 09:07:54 no-preload-769733 crio[768]: time="2025-12-06T09:07:54.110302822Z" level=info msg="Starting container: a0f7b25b94362f329181cb3e889c0c840b59d77397b8ea34fffa12f7ff8d075f" id=cc67e8d0-0365-4652-9607-e33d1c8490b2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:07:54 no-preload-769733 crio[768]: time="2025-12-06T09:07:54.112410325Z" level=info msg="Started container" PID=2827 containerID=a0f7b25b94362f329181cb3e889c0c840b59d77397b8ea34fffa12f7ff8d075f description=kube-system/coredns-7d764666f9-jllj2/coredns id=cc67e8d0-0365-4652-9607-e33d1c8490b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2489a1de5296140fd73463b25e337b361bc0fba4f247826ea6f846c8f85c7f83
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.042398361Z" level=info msg="Running pod sandbox: default/busybox/POD" id=689d6e6f-f0a7-4b1e-9961-44c6d8b873a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.042484624Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.047937049Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1db07be407fbf45d497430e44f77ebc897b0d1fba3c78d9496d289da41927bc3 UID:7c052e39-08e6-442e-970a-3e9534e4ea7b NetNS:/var/run/netns/b2384f6b-5490-4f5b-ae8b-6a1e041aaccb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b8c0}] Aliases:map[]}"
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.047964354Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.058952404Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1db07be407fbf45d497430e44f77ebc897b0d1fba3c78d9496d289da41927bc3 UID:7c052e39-08e6-442e-970a-3e9534e4ea7b NetNS:/var/run/netns/b2384f6b-5490-4f5b-ae8b-6a1e041aaccb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b8c0}] Aliases:map[]}"
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.05920024Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.059962654Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.060829491Z" level=info msg="Ran pod sandbox 1db07be407fbf45d497430e44f77ebc897b0d1fba3c78d9496d289da41927bc3 with infra container: default/busybox/POD" id=689d6e6f-f0a7-4b1e-9961-44c6d8b873a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.062190302Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab128c76-e089-4ec6-9313-efda5de67086 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.06232709Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ab128c76-e089-4ec6-9313-efda5de67086 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.062372595Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ab128c76-e089-4ec6-9313-efda5de67086 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.063139279Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=37969c9c-9c06-4c9d-8f53-5ae010e0bb72 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:07:57 no-preload-769733 crio[768]: time="2025-12-06T09:07:57.064487274Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.378981365Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=37969c9c-9c06-4c9d-8f53-5ae010e0bb72 name=/runtime.v1.ImageService/PullImage
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.379512958Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fc51c169-0a6e-475e-a32f-6ed87ab294a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.380946954Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6307c914-32b7-4688-aef5-6b1483936ac1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.386190832Z" level=info msg="Creating container: default/busybox/busybox" id=3057e99e-6c40-4721-9905-9ab16559ed81 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.386341714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.389965522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.390418426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.416026236Z" level=info msg="Created container 0c37bd7a153f4c4f41111b6e3cae291a023c436914a5a93c9a2b630d0dd3d12b: default/busybox/busybox" id=3057e99e-6c40-4721-9905-9ab16559ed81 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.416567514Z" level=info msg="Starting container: 0c37bd7a153f4c4f41111b6e3cae291a023c436914a5a93c9a2b630d0dd3d12b" id=a66c74cc-b36d-4d9a-bff5-c5a4a5a311d8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:07:58 no-preload-769733 crio[768]: time="2025-12-06T09:07:58.418610037Z" level=info msg="Started container" PID=2905 containerID=0c37bd7a153f4c4f41111b6e3cae291a023c436914a5a93c9a2b630d0dd3d12b description=default/busybox/busybox id=a66c74cc-b36d-4d9a-bff5-c5a4a5a311d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1db07be407fbf45d497430e44f77ebc897b0d1fba3c78d9496d289da41927bc3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0c37bd7a153f4       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   1db07be407fbf       busybox                                     default
	a0f7b25b94362       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   2489a1de52961       coredns-7d764666f9-jllj2                    kube-system
	4a759cf6f7e4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   e02f1937c2d92       storage-provisioner                         kube-system
	0f3f0de1f5d09       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   12ec5a9c97c27       kindnet-7m8h6                               kube-system
	b0f6dd466cd3b       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   01892b2cbb7e3       kube-proxy-5jsq2                            kube-system
	3e481497690d7       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   528282d9677cd       kube-controller-manager-no-preload-769733   kube-system
	72dce17ead480       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   83cd846350be8       kube-scheduler-no-preload-769733            kube-system
	5f0c7c60dbd19       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   90f74af116826       etcd-no-preload-769733                      kube-system
	6191c587f970d       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   cf69d6e5b804e       kube-apiserver-no-preload-769733            kube-system
	
	
	==> coredns [a0f7b25b94362f329181cb3e889c0c840b59d77397b8ea34fffa12f7ff8d075f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56363 - 61087 "HINFO IN 3063249055448241104.1789712077333936200. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084434251s
	
	
	==> describe nodes <==
	Name:               no-preload-769733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-769733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=no-preload-769733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_07_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:07:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-769733
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:08:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:08:07 +0000   Sat, 06 Dec 2025 09:07:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:08:07 +0000   Sat, 06 Dec 2025 09:07:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:08:07 +0000   Sat, 06 Dec 2025 09:07:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:08:07 +0000   Sat, 06 Dec 2025 09:07:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-769733
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                102fff6b-fbfc-491f-a5fe-409060b67cce
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-jllj2                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-769733                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-7m8h6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-769733             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-769733    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-5jsq2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-769733             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node no-preload-769733 event: Registered Node no-preload-769733 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [5f0c7c60dbd19f340acf0b39b2e349595a094e7b313060c613639a3361b2b665] <==
	{"level":"warn","ts":"2025-12-06T09:07:32.806452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.812792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.818627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.824935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.844123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.850737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.858584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.871752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.878637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.885727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.892365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.899453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.906032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.912358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.929153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.935856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.942939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.949442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:32.996287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:07:33.849842Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.147296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-769733\" limit:1 ","response":"range_response_count:1 size:3506"}
	{"level":"info","ts":"2025-12-06T09:07:33.849900Z","caller":"traceutil/trace.go:172","msg":"trace[1556817394] range","detail":"{range_begin:/registry/minions/no-preload-769733; range_end:; response_count:1; response_revision:66; }","duration":"142.228382ms","start":"2025-12-06T09:07:33.707660Z","end":"2025-12-06T09:07:33.849889Z","steps":["trace[1556817394] 'range keys from in-memory index tree'  (duration: 142.00151ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:07:34.462802Z","caller":"traceutil/trace.go:172","msg":"trace[129819694] linearizableReadLoop","detail":"{readStateIndex:71; appliedIndex:71; }","duration":"112.131408ms","start":"2025-12-06T09:07:34.350644Z","end":"2025-12-06T09:07:34.462776Z","steps":["trace[129819694] 'read index received'  (duration: 112.122488ms)","trace[129819694] 'applied index is now lower than readState.Index'  (duration: 7.716µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:07:34.463041Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.372651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-06T09:07:34.463055Z","caller":"traceutil/trace.go:172","msg":"trace[560669611] transaction","detail":"{read_only:false; response_revision:68; number_of_response:1; }","duration":"112.909347ms","start":"2025-12-06T09:07:34.350115Z","end":"2025-12-06T09:07:34.463025Z","steps":["trace[560669611] 'process raft request'  (duration: 112.715562ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:07:34.463085Z","caller":"traceutil/trace.go:172","msg":"trace[60195125] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:0; response_revision:67; }","duration":"112.435873ms","start":"2025-12-06T09:07:34.350640Z","end":"2025-12-06T09:07:34.463076Z","steps":["trace[60195125] 'agreement among raft nodes before linearized reading'  (duration: 112.263114ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:08:07 up 50 min,  0 user,  load average: 1.57, 2.02, 1.59
	Linux no-preload-769733 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0f3f0de1f5d09178419f5a047f9c6a807e065adda923ddcc2abd1dee620c489f] <==
	I1206 09:07:43.326218       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:07:43.326516       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1206 09:07:43.326639       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:07:43.326653       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:07:43.326671       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:07:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:07:43.526643       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:07:43.526667       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:07:43.526678       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:07:43.527322       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:07:43.927297       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:07:43.927322       1 metrics.go:72] Registering metrics
	I1206 09:07:43.927388       1 controller.go:711] "Syncing nftables rules"
	I1206 09:07:53.527457       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:07:53.527560       1 main.go:301] handling current node
	I1206 09:08:03.528122       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:08:03.528168       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6191c587f970dceced8ac4400b8cd3cb2fc1694a69e0ed3211f3a359328d74d5] <==
	I1206 09:07:33.445730       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:07:33.445800       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1206 09:07:33.446707       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:33.446761       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:33.450164       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:07:33.631216       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:07:34.464026       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1206 09:07:34.559593       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:07:34.559612       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:07:35.290779       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:07:35.328485       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:07:35.377733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:07:35.449895       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:07:35.455074       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1206 09:07:35.456052       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:07:35.459848       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:07:36.331834       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:07:36.340178       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:07:36.348446       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:07:40.881976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:07:40.885478       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:07:41.280881       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:07:41.382313       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1206 09:07:41.382313       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1206 09:08:05.821953       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:41992: use of closed network connection
	
	
	==> kube-controller-manager [3e481497690d70540933f497bba3c7ce7b0aaadf371f5f618cf272724617a28b] <==
	I1206 09:07:40.183508       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.185078       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.185231       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.185253       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.185402       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.185503       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.185551       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.185780       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:07:40.185953       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-769733"
	I1206 09:07:40.186015       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1206 09:07:40.183596       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.183605       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.187132       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.188472       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.188556       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.188732       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.188596       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.189268       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.194514       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-769733" podCIDRs=["10.244.0.0/24"]
	I1206 09:07:40.195042       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:07:40.284473       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:40.284493       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:07:40.284500       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:07:40.295853       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:55.188611       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [b0f6dd466cd3b0ce98638b383c465d90d468726687208bc5ab30a5e2cbf8be05] <==
	I1206 09:07:41.834666       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:07:41.903818       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:07:42.004117       1 shared_informer.go:377] "Caches are synced"
	I1206 09:07:42.004164       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1206 09:07:42.004287       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:07:42.035167       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:07:42.035364       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:07:42.045644       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:07:42.046483       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:07:42.046743       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:07:42.050325       1 config.go:200] "Starting service config controller"
	I1206 09:07:42.053493       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:07:42.051438       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:07:42.053565       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:07:42.051874       1 config.go:309] "Starting node config controller"
	I1206 09:07:42.053596       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:07:42.053601       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:07:42.051461       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:07:42.053611       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:07:42.154267       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:07:42.154285       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:07:42.154312       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [72dce17ead480d267e2b9c03977b4072d8073e97405bf66b5848a6a830a21cf6] <==
	E1206 09:07:34.432737       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:07:34.433823       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1206 09:07:34.494689       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:07:34.495687       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1206 09:07:34.580486       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1206 09:07:34.581505       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1206 09:07:34.737111       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1206 09:07:34.737952       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1206 09:07:34.774474       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:07:34.775635       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1206 09:07:34.806637       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1206 09:07:34.807874       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1206 09:07:34.824237       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1206 09:07:34.825205       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1206 09:07:34.833451       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:07:34.834541       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1206 09:07:34.945020       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1206 09:07:34.946031       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1206 09:07:34.961410       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:07:34.962445       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1206 09:07:34.983715       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:07:34.984732       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1206 09:07:34.987773       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:07:34.988826       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1206 09:07:37.584562       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:07:41 no-preload-769733 kubelet[2235]: I1206 09:07:41.476781    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/858f0c55-78ac-45ce-9824-f34d3de8cdc6-xtables-lock\") pod \"kindnet-7m8h6\" (UID: \"858f0c55-78ac-45ce-9824-f34d3de8cdc6\") " pod="kube-system/kindnet-7m8h6"
	Dec 06 09:07:41 no-preload-769733 kubelet[2235]: I1206 09:07:41.476799    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba99eecc-e4e4-4861-9a7c-ab51b62684bf-xtables-lock\") pod \"kube-proxy-5jsq2\" (UID: \"ba99eecc-e4e4-4861-9a7c-ab51b62684bf\") " pod="kube-system/kube-proxy-5jsq2"
	Dec 06 09:07:41 no-preload-769733 kubelet[2235]: I1206 09:07:41.476814    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba99eecc-e4e4-4861-9a7c-ab51b62684bf-lib-modules\") pod \"kube-proxy-5jsq2\" (UID: \"ba99eecc-e4e4-4861-9a7c-ab51b62684bf\") " pod="kube-system/kube-proxy-5jsq2"
	Dec 06 09:07:41 no-preload-769733 kubelet[2235]: I1206 09:07:41.476828    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/858f0c55-78ac-45ce-9824-f34d3de8cdc6-lib-modules\") pod \"kindnet-7m8h6\" (UID: \"858f0c55-78ac-45ce-9824-f34d3de8cdc6\") " pod="kube-system/kindnet-7m8h6"
	Dec 06 09:07:41 no-preload-769733 kubelet[2235]: I1206 09:07:41.476843    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhdkt\" (UniqueName: \"kubernetes.io/projected/858f0c55-78ac-45ce-9824-f34d3de8cdc6-kube-api-access-xhdkt\") pod \"kindnet-7m8h6\" (UID: \"858f0c55-78ac-45ce-9824-f34d3de8cdc6\") " pod="kube-system/kindnet-7m8h6"
	Dec 06 09:07:42 no-preload-769733 kubelet[2235]: I1206 09:07:42.195837    2235 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-5jsq2" podStartSLOduration=1.1958198580000001 podStartE2EDuration="1.195819858s" podCreationTimestamp="2025-12-06 09:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:07:42.195724601 +0000 UTC m=+6.123438536" watchObservedRunningTime="2025-12-06 09:07:42.195819858 +0000 UTC m=+6.123533794"
	Dec 06 09:07:42 no-preload-769733 kubelet[2235]: E1206 09:07:42.386795    2235 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-769733" containerName="kube-controller-manager"
	Dec 06 09:07:46 no-preload-769733 kubelet[2235]: E1206 09:07:46.010517    2235 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-769733" containerName="kube-scheduler"
	Dec 06 09:07:46 no-preload-769733 kubelet[2235]: I1206 09:07:46.021640    2235 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-7m8h6" podStartSLOduration=3.717257997 podStartE2EDuration="5.021621534s" podCreationTimestamp="2025-12-06 09:07:41 +0000 UTC" firstStartedPulling="2025-12-06 09:07:41.736973663 +0000 UTC m=+5.664687592" lastFinishedPulling="2025-12-06 09:07:43.041337214 +0000 UTC m=+6.969051129" observedRunningTime="2025-12-06 09:07:43.20513714 +0000 UTC m=+7.132851112" watchObservedRunningTime="2025-12-06 09:07:46.021621534 +0000 UTC m=+9.949335469"
	Dec 06 09:07:46 no-preload-769733 kubelet[2235]: E1206 09:07:46.807542    2235 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-769733" containerName="kube-apiserver"
	Dec 06 09:07:50 no-preload-769733 kubelet[2235]: E1206 09:07:50.585879    2235 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-769733" containerName="etcd"
	Dec 06 09:07:52 no-preload-769733 kubelet[2235]: E1206 09:07:52.391253    2235 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-769733" containerName="kube-controller-manager"
	Dec 06 09:07:53 no-preload-769733 kubelet[2235]: I1206 09:07:53.728954    2235 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 06 09:07:53 no-preload-769733 kubelet[2235]: I1206 09:07:53.868032    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0a6b38d2-2d16-482f-a0eb-c386f48ac1ca-tmp\") pod \"storage-provisioner\" (UID: \"0a6b38d2-2d16-482f-a0eb-c386f48ac1ca\") " pod="kube-system/storage-provisioner"
	Dec 06 09:07:53 no-preload-769733 kubelet[2235]: I1206 09:07:53.868079    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krxqn\" (UniqueName: \"kubernetes.io/projected/0a6b38d2-2d16-482f-a0eb-c386f48ac1ca-kube-api-access-krxqn\") pod \"storage-provisioner\" (UID: \"0a6b38d2-2d16-482f-a0eb-c386f48ac1ca\") " pod="kube-system/storage-provisioner"
	Dec 06 09:07:53 no-preload-769733 kubelet[2235]: I1206 09:07:53.868113    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60e2b794-62a6-4c6e-b48b-4d95862a11d4-config-volume\") pod \"coredns-7d764666f9-jllj2\" (UID: \"60e2b794-62a6-4c6e-b48b-4d95862a11d4\") " pod="kube-system/coredns-7d764666f9-jllj2"
	Dec 06 09:07:53 no-preload-769733 kubelet[2235]: I1206 09:07:53.868334    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdsqn\" (UniqueName: \"kubernetes.io/projected/60e2b794-62a6-4c6e-b48b-4d95862a11d4-kube-api-access-mdsqn\") pod \"coredns-7d764666f9-jllj2\" (UID: \"60e2b794-62a6-4c6e-b48b-4d95862a11d4\") " pod="kube-system/coredns-7d764666f9-jllj2"
	Dec 06 09:07:54 no-preload-769733 kubelet[2235]: E1206 09:07:54.210686    2235 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jllj2" containerName="coredns"
	Dec 06 09:07:54 no-preload-769733 kubelet[2235]: I1206 09:07:54.238345    2235 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-jllj2" podStartSLOduration=13.238322349 podStartE2EDuration="13.238322349s" podCreationTimestamp="2025-12-06 09:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:07:54.222954777 +0000 UTC m=+18.150668713" watchObservedRunningTime="2025-12-06 09:07:54.238322349 +0000 UTC m=+18.166036288"
	Dec 06 09:07:54 no-preload-769733 kubelet[2235]: I1206 09:07:54.250729    2235 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.250707849 podStartE2EDuration="13.250707849s" podCreationTimestamp="2025-12-06 09:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:07:54.250509068 +0000 UTC m=+18.178223004" watchObservedRunningTime="2025-12-06 09:07:54.250707849 +0000 UTC m=+18.178421786"
	Dec 06 09:07:55 no-preload-769733 kubelet[2235]: E1206 09:07:55.214716    2235 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jllj2" containerName="coredns"
	Dec 06 09:07:56 no-preload-769733 kubelet[2235]: E1206 09:07:56.015593    2235 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-769733" containerName="kube-scheduler"
	Dec 06 09:07:56 no-preload-769733 kubelet[2235]: E1206 09:07:56.216959    2235 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jllj2" containerName="coredns"
	Dec 06 09:07:56 no-preload-769733 kubelet[2235]: I1206 09:07:56.786844    2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jqj\" (UniqueName: \"kubernetes.io/projected/7c052e39-08e6-442e-970a-3e9534e4ea7b-kube-api-access-p2jqj\") pod \"busybox\" (UID: \"7c052e39-08e6-442e-970a-3e9534e4ea7b\") " pod="default/busybox"
	Dec 06 09:07:59 no-preload-769733 kubelet[2235]: I1206 09:07:59.234569    2235 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.9170869050000001 podStartE2EDuration="3.234551623s" podCreationTimestamp="2025-12-06 09:07:56 +0000 UTC" firstStartedPulling="2025-12-06 09:07:57.06277055 +0000 UTC m=+20.990484479" lastFinishedPulling="2025-12-06 09:07:58.380235279 +0000 UTC m=+22.307949197" observedRunningTime="2025-12-06 09:07:59.234437872 +0000 UTC m=+23.162151807" watchObservedRunningTime="2025-12-06 09:07:59.234551623 +0000 UTC m=+23.162265561"
	
	
	==> storage-provisioner [4a759cf6f7e4b68c39e4516da8d9abe3d12c57fb28ff41ebfe9375d947441624] <==
	I1206 09:07:54.114357       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:07:54.123872       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:07:54.123960       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:07:54.126120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:54.130845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:07:54.131068       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:07:54.131226       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-769733_837959ae-553f-4b43-8b1a-a87f4647dbf7!
	I1206 09:07:54.131205       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dce96d18-3f62-41ad-9342-98788d1eeae0", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-769733_837959ae-553f-4b43-8b1a-a87f4647dbf7 became leader
	W1206 09:07:54.133447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:54.137657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:07:54.234398       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-769733_837959ae-553f-4b43-8b1a-a87f4647dbf7!
	W1206 09:07:56.140747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:56.144426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:58.147589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:58.151557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:00.154982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:00.159107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:02.162398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:02.166014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:04.169493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:04.175615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:06.179166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:06.183677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-769733 -n no-preload-769733
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-769733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.37s)
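
For context, the post-mortem block above can be reproduced by hand against the same profile using the commands the harness itself runs (a sketch based only on invocations visible in this log; the profile name no-preload-769733 is specific to this run):

	# cluster status and non-Running pods, as collected by helpers_test.go
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-769733 -n no-preload-769733
	kubectl --context no-preload-769733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# in-node container listing, as in the stderr trace above (run via minikube ssh)
	out/minikube-linux-amd64 ssh -p no-preload-769733 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
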

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-769733 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-769733 --alsologtostderr -v=1: exit status 80 (1.949004842s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-769733 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:09:04.941183  281737 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:04.941476  281737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:04.941486  281737 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:04.941492  281737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:04.941678  281737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:04.942061  281737 out.go:368] Setting JSON to false
	I1206 09:09:04.942088  281737 mustload.go:66] Loading cluster: no-preload-769733
	I1206 09:09:04.942575  281737 config.go:182] Loaded profile config "no-preload-769733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:04.943140  281737 cli_runner.go:164] Run: docker container inspect no-preload-769733 --format={{.State.Status}}
	I1206 09:09:04.965655  281737 host.go:66] Checking if "no-preload-769733" exists ...
	I1206 09:09:04.965958  281737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:05.031188  281737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-06 09:09:05.019920016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:05.031874  281737 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-769733 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:09:05.033571  281737 out.go:179] * Pausing node no-preload-769733 ... 
	I1206 09:09:05.034801  281737 host.go:66] Checking if "no-preload-769733" exists ...
	I1206 09:09:05.035072  281737 ssh_runner.go:195] Run: systemctl --version
	I1206 09:09:05.035125  281737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-769733
	I1206 09:09:05.058772  281737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/no-preload-769733/id_rsa Username:docker}
	I1206 09:09:05.154877  281737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:05.167893  281737 pause.go:52] kubelet running: true
	I1206 09:09:05.167976  281737 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:05.334220  281737 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:05.334309  281737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:05.409057  281737 cri.go:89] found id: "3c32925f8ca5ebb1fe444e961d3933b6ab892cbdaa78228462c024661935c5e9"
	I1206 09:09:05.409081  281737 cri.go:89] found id: "13a307ecfdb80882ff1f7a7ff010df7e0ecb383b512e2d747867e48a8c753e32"
	I1206 09:09:05.409086  281737 cri.go:89] found id: "60efef5c2c46ca4f5242aa414ab8357c06d3bd4857bb484da8a7a38eef2ce888"
	I1206 09:09:05.409091  281737 cri.go:89] found id: "ec86b1cf9e1633d8e898b3b09622c6c417c612e379a6bd67b0c3a5ffbb3629f8"
	I1206 09:09:05.409094  281737 cri.go:89] found id: "d0afbad498b737f9a326bbd880b916eec7f277a7a2d1bbc94bf74de4be59afa5"
	I1206 09:09:05.409099  281737 cri.go:89] found id: "64e6d8d09b82ddda9e75d07401cf42a63194cca950c2d205b309768e43ff4df9"
	I1206 09:09:05.409104  281737 cri.go:89] found id: "ddbc73ad7fa5d9113b7d32a21294ac2f252d10dea2e4d287294a4318592979c5"
	I1206 09:09:05.409108  281737 cri.go:89] found id: "2822e91cc3bdbcb4954732f408d6dd842598613c1df6081098b5a50c9e302aa6"
	I1206 09:09:05.409112  281737 cri.go:89] found id: "5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd"
	I1206 09:09:05.409122  281737 cri.go:89] found id: "6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d"
	I1206 09:09:05.409127  281737 cri.go:89] found id: ""
	I1206 09:09:05.409173  281737 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:05.421404  281737 retry.go:31] will retry after 364.604308ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:05Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:05.787054  281737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:05.800823  281737 pause.go:52] kubelet running: false
	I1206 09:09:05.800883  281737 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:05.949158  281737 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:05.949276  281737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:06.017903  281737 cri.go:89] found id: "3c32925f8ca5ebb1fe444e961d3933b6ab892cbdaa78228462c024661935c5e9"
	I1206 09:09:06.017921  281737 cri.go:89] found id: "13a307ecfdb80882ff1f7a7ff010df7e0ecb383b512e2d747867e48a8c753e32"
	I1206 09:09:06.017925  281737 cri.go:89] found id: "60efef5c2c46ca4f5242aa414ab8357c06d3bd4857bb484da8a7a38eef2ce888"
	I1206 09:09:06.017928  281737 cri.go:89] found id: "ec86b1cf9e1633d8e898b3b09622c6c417c612e379a6bd67b0c3a5ffbb3629f8"
	I1206 09:09:06.017931  281737 cri.go:89] found id: "d0afbad498b737f9a326bbd880b916eec7f277a7a2d1bbc94bf74de4be59afa5"
	I1206 09:09:06.017934  281737 cri.go:89] found id: "64e6d8d09b82ddda9e75d07401cf42a63194cca950c2d205b309768e43ff4df9"
	I1206 09:09:06.017937  281737 cri.go:89] found id: "ddbc73ad7fa5d9113b7d32a21294ac2f252d10dea2e4d287294a4318592979c5"
	I1206 09:09:06.017940  281737 cri.go:89] found id: "2822e91cc3bdbcb4954732f408d6dd842598613c1df6081098b5a50c9e302aa6"
	I1206 09:09:06.017943  281737 cri.go:89] found id: "5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd"
	I1206 09:09:06.017957  281737 cri.go:89] found id: "6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d"
	I1206 09:09:06.017962  281737 cri.go:89] found id: ""
	I1206 09:09:06.018020  281737 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:06.029860  281737 retry.go:31] will retry after 527.755431ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:06Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:06.558183  281737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:06.572124  281737 pause.go:52] kubelet running: false
	I1206 09:09:06.572201  281737 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:06.719555  281737 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:06.719633  281737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:06.791488  281737 cri.go:89] found id: "3c32925f8ca5ebb1fe444e961d3933b6ab892cbdaa78228462c024661935c5e9"
	I1206 09:09:06.791513  281737 cri.go:89] found id: "13a307ecfdb80882ff1f7a7ff010df7e0ecb383b512e2d747867e48a8c753e32"
	I1206 09:09:06.791518  281737 cri.go:89] found id: "60efef5c2c46ca4f5242aa414ab8357c06d3bd4857bb484da8a7a38eef2ce888"
	I1206 09:09:06.791523  281737 cri.go:89] found id: "ec86b1cf9e1633d8e898b3b09622c6c417c612e379a6bd67b0c3a5ffbb3629f8"
	I1206 09:09:06.791527  281737 cri.go:89] found id: "d0afbad498b737f9a326bbd880b916eec7f277a7a2d1bbc94bf74de4be59afa5"
	I1206 09:09:06.791533  281737 cri.go:89] found id: "64e6d8d09b82ddda9e75d07401cf42a63194cca950c2d205b309768e43ff4df9"
	I1206 09:09:06.791537  281737 cri.go:89] found id: "ddbc73ad7fa5d9113b7d32a21294ac2f252d10dea2e4d287294a4318592979c5"
	I1206 09:09:06.791541  281737 cri.go:89] found id: "2822e91cc3bdbcb4954732f408d6dd842598613c1df6081098b5a50c9e302aa6"
	I1206 09:09:06.791546  281737 cri.go:89] found id: "5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd"
	I1206 09:09:06.791555  281737 cri.go:89] found id: "6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d"
	I1206 09:09:06.791563  281737 cri.go:89] found id: ""
	I1206 09:09:06.791607  281737 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:06.806506  281737 out.go:203] 
	W1206 09:09:06.807784  281737 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:09:06.807808  281737 out.go:285] * 
	* 
	W1206 09:09:06.814517  281737 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:09:06.815919  281737 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-769733 --alsologtostderr -v=1 failed: exit status 80
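Note: the pause path first lists kube-system, kubernetes-dashboard and istio-operator containers through crictl, then shells out to `sudo runc list -f json`; every attempt above fails with `open /run/runc: no such file or directory`, and after the backoff retries minikube gives up with GUEST_PAUSE (exit status 80). One plausible reading is that runc's default state directory /run/runc simply does not exist on this node (CRI-O can be configured with a different OCI runtime or state root), so `runc list` errors out instead of returning an empty list. The `retry.go:31] will retry after ...` lines correspond to a backoff wrapper around that command; the Go sketch below only illustrates that pattern and is not minikube's actual retry package.

    // retrysketch illustrates the "will retry after ..." backoff seen in the log.
    // It is a minimal sketch, not minikube's retry implementation.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // retryWithBackoff runs fn up to attempts times, sleeping a jittered,
    // roughly doubling delay between failures, and returns the last error.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	// Equivalent of the failing step above: list runc containers as JSON.
    	err := retryWithBackoff(3, 300*time.Millisecond, func() error {
    		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("runc list: %v: %s", err, out)
    		}
    		return nil
    	})
    	if err != nil {
    		fmt.Println("giving up:", err)
    	}
    }

With the same persistently failing command, the sketch exhausts its attempts and reports the last error, which matches the exit-status-80 behaviour recorded above.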
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-769733
helpers_test.go:243: (dbg) docker inspect no-preload-769733:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01",
	        "Created": "2025-12-06T09:07:11.630466318Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:08:26.794910158Z",
	            "FinishedAt": "2025-12-06T09:08:25.895659891Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/hosts",
	        "LogPath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01-json.log",
	        "Name": "/no-preload-769733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-769733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-769733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01",
	                "LowerDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-769733",
	                "Source": "/var/lib/docker/volumes/no-preload-769733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-769733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-769733",
	                "name.minikube.sigs.k8s.io": "no-preload-769733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "89e36bb5782d56a7227763d37b25445fea444c0cebfce8e76c37e438916eaf35",
	            "SandboxKey": "/var/run/docker/netns/89e36bb5782d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-769733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d145c702ed0f08c782b680a462b6b5a0d8a60b36a26fd7d3512cd90419c2ab9",
	                    "EndpointID": "a50f203da5d1d47288a30f672244bdd9aa55ab8fde1ad41310b86f6bf9aa6330",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ee:90:21:7d:0b:2c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-769733",
	                        "2b0a9b7f20f1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
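The suite repeatedly resolves the node's SSH endpoint from this inspect data with the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` (here 127.0.0.1:33073). The same lookup can be done by decoding the raw `docker inspect` JSON; the standalone sketch below is illustrative and not part of the test suite.

    // portsketch prints the host address mapped to 22/tcp for a container,
    // decoding the same JSON structure shown in the docker inspect output above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	out, err := exec.Command("docker", "inspect", "no-preload-769733").Output()
    	if err != nil {
    		log.Fatalf("docker inspect: %v", err)
    	}
    	var containers []inspect
    	if err := json.Unmarshal(out, &containers); err != nil {
    		log.Fatalf("decode: %v", err)
    	}
    	for _, c := range containers {
    		for _, b := range c.NetworkSettings.Ports["22/tcp"] {
    			fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort)
    		}
    	}
    }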
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-769733 -n no-preload-769733
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-769733 -n no-preload-769733: exit status 2 (341.567472ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
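As the helper notes, `minikube status --format={{.Host}}` prints Running while exiting with status 2, so the post-mortem treats the non-zero exit as informational rather than fatal. A minimal sketch of reading both the printed host state and the exit code (illustrative only, not the helpers_test.go implementation):

    // statussketch runs the same status query as the post-mortem helper and
    // separates the printed host state from the process exit code.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "status",
    		"--format={{.Host}}", "-p", "no-preload-769733", "-n", "no-preload-769733")
    	out, err := cmd.Output()
    	state := strings.TrimSpace(string(out))

    	var exitErr *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Printf("host=%s exit=0\n", state)
    	case errors.As(err, &exitErr):
    		// Non-zero exit with usable stdout, e.g. "Running" with exit status 2 above.
    		fmt.Printf("host=%s exit=%d (may be ok)\n", state, exitErr.ExitCode())
    	default:
    		fmt.Println("could not run status:", err)
    	}
    }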
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-769733 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-769733 logs -n 25: (1.183784941s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-646473 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-646473             │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-646473             │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p cilium-646473 sudo crio config                                                                                                                                                                                                             │ cilium-646473             │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │                     │
	│ delete  │ -p cilium-646473                                                                                                                                                                                                                              │ cilium-646473             │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:06 UTC │
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-322324    │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ stop    │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079       │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p NoKubernetes-328079 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-328079       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ -p NoKubernetes-328079 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-328079       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ delete  │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733         │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-322324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-322324    │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-322324 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-322324    │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-769733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-769733         │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ stop    │ -p no-preload-769733 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-769733         │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-322324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-322324    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-322324    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-769733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-769733         │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733         │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ delete  │ -p stopped-upgrade-454433                                                                                                                                                                                                                     │ stopped-upgrade-454433    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-931091        │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-702638 │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-702638 │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ image   │ no-preload-769733 image list --format=json                                                                                                                                                                                                    │ no-preload-769733         │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p no-preload-769733 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-769733         │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-702638                                                                                                                                                                                                                  │ kubernetes-upgrade-702638 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:08:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:08:58.540655  279021 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:08:58.540895  279021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:08:58.540904  279021 out.go:374] Setting ErrFile to fd 2...
	I1206 09:08:58.540908  279021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:08:58.541157  279021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:08:58.541570  279021 out.go:368] Setting JSON to false
	I1206 09:08:58.542664  279021 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3090,"bootTime":1765009049,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:08:58.542716  279021 start.go:143] virtualization: kvm guest
	I1206 09:08:58.549568  279021 out.go:179] * [kubernetes-upgrade-702638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:08:58.551381  279021 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:08:58.551374  279021 notify.go:221] Checking for updates...
	I1206 09:08:58.554103  279021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:08:58.555590  279021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:08:58.557041  279021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:08:58.561497  279021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:08:58.563123  279021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:08:58.564936  279021 config.go:182] Loaded profile config "kubernetes-upgrade-702638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:08:58.565523  279021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:08:58.591732  279021 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:08:58.591833  279021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:08:58.652577  279021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:91 SystemTime:2025-12-06 09:08:58.642165459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:08:58.652715  279021 docker.go:319] overlay module found
	I1206 09:08:58.659494  279021 out.go:179] * Using the docker driver based on existing profile
	I1206 09:08:58.660890  279021 start.go:309] selected driver: docker
	I1206 09:08:58.660907  279021 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-702638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-702638 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:08:58.661014  279021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:08:58.661570  279021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:08:58.727054  279021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:91 SystemTime:2025-12-06 09:08:58.716553592 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:08:58.727465  279021 cni.go:84] Creating CNI manager for ""
	I1206 09:08:58.727552  279021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:08:58.727604  279021 start.go:353] cluster config:
	{Name:kubernetes-upgrade-702638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-702638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:08:58.730578  279021 out.go:179] * Starting "kubernetes-upgrade-702638" primary control-plane node in "kubernetes-upgrade-702638" cluster
	I1206 09:08:58.731796  279021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:08:58.733147  279021 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:08:58.734226  279021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:08:58.734264  279021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:08:58.734274  279021 cache.go:65] Caching tarball of preloaded images
	I1206 09:08:58.734324  279021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:08:58.734419  279021 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:08:58.734437  279021 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:08:58.734564  279021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/config.json ...
	I1206 09:08:58.755357  279021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:08:58.755374  279021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:08:58.755390  279021 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:08:58.755423  279021 start.go:360] acquireMachinesLock for kubernetes-upgrade-702638: {Name:mk6a5fc7b95c5c53ac1a6f3a2c491f1d0af97575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:08:58.755482  279021 start.go:364] duration metric: took 39.413µs to acquireMachinesLock for "kubernetes-upgrade-702638"
	I1206 09:08:58.755508  279021 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:08:58.755516  279021 fix.go:54] fixHost starting: 
	I1206 09:08:58.755711  279021 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-702638 --format={{.State.Status}}
	I1206 09:08:58.774429  279021 fix.go:112] recreateIfNeeded on kubernetes-upgrade-702638: state=Running err=<nil>
	W1206 09:08:58.774455  279021 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:08:56.263266  278230 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:08:56.263497  278230 start.go:159] libmachine.API.Create for "embed-certs-931091" (driver="docker")
	I1206 09:08:56.263531  278230 client.go:173] LocalClient.Create starting
	I1206 09:08:56.263618  278230 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 09:08:56.263656  278230 main.go:143] libmachine: Decoding PEM data...
	I1206 09:08:56.263680  278230 main.go:143] libmachine: Parsing certificate...
	I1206 09:08:56.263757  278230 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 09:08:56.263792  278230 main.go:143] libmachine: Decoding PEM data...
	I1206 09:08:56.263812  278230 main.go:143] libmachine: Parsing certificate...
	I1206 09:08:56.264201  278230 cli_runner.go:164] Run: docker network inspect embed-certs-931091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:08:56.284177  278230 cli_runner.go:211] docker network inspect embed-certs-931091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:08:56.284254  278230 network_create.go:284] running [docker network inspect embed-certs-931091] to gather additional debugging logs...
	I1206 09:08:56.284271  278230 cli_runner.go:164] Run: docker network inspect embed-certs-931091
	W1206 09:08:56.300716  278230 cli_runner.go:211] docker network inspect embed-certs-931091 returned with exit code 1
	I1206 09:08:56.300743  278230 network_create.go:287] error running [docker network inspect embed-certs-931091]: docker network inspect embed-certs-931091: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-931091 not found
	I1206 09:08:56.300755  278230 network_create.go:289] output of [docker network inspect embed-certs-931091]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-931091 not found
	
	** /stderr **
	I1206 09:08:56.300855  278230 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:08:56.321092  278230 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9cbe8712784d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:e7:96:d9:b6:56} reservation:<nil>}
	I1206 09:08:56.322120  278230 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e3326c841ae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:98:ee:f3:0b:a9} reservation:<nil>}
	I1206 09:08:56.323188  278230 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c7af411946b0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:ab:a1:53:1d:7e} reservation:<nil>}
	I1206 09:08:56.323912  278230 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f6aeaf0351aa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:f6:31:65:11:00} reservation:<nil>}
	I1206 09:08:56.324607  278230 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6a656c6b5a08 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:62:de:88:d9:0b:15} reservation:<nil>}
	I1206 09:08:56.325406  278230 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2d145c702ed0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:7a:16:69:36:0a:30} reservation:<nil>}
	I1206 09:08:56.326566  278230 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb9ef0}
	I1206 09:08:56.326602  278230 network_create.go:124] attempt to create docker network embed-certs-931091 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1206 09:08:56.326664  278230 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-931091 embed-certs-931091
	I1206 09:08:56.378086  278230 network_create.go:108] docker network embed-certs-931091 192.168.103.0/24 created
	I1206 09:08:56.378118  278230 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-931091" container
	I1206 09:08:56.378181  278230 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:08:56.395807  278230 cli_runner.go:164] Run: docker volume create embed-certs-931091 --label name.minikube.sigs.k8s.io=embed-certs-931091 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:08:56.413871  278230 oci.go:103] Successfully created a docker volume embed-certs-931091
	I1206 09:08:56.413969  278230 cli_runner.go:164] Run: docker run --rm --name embed-certs-931091-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-931091 --entrypoint /usr/bin/test -v embed-certs-931091:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:08:56.856929  278230 oci.go:107] Successfully prepared a docker volume embed-certs-931091
	I1206 09:08:56.857033  278230 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:08:56.857050  278230 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:08:56.857121  278230 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-931091:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
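Aside: the embed-certs-931091 start above scans the 192.168.x.0/24 private ranges in steps of 9 in the third octet (49, 58, 67, 76, 85 and 94 are already taken) and settles on 192.168.103.0/24 before running `docker network create`. A minimal sketch of that scan, with the taken subnets hard-coded here instead of gathered from `docker network inspect`:

    // subnetsketch mirrors the free-subnet scan visible in the log above:
    // candidate /24s starting at 192.168.49.0 and advancing by 9 in the third
    // octet, skipping any subnet that is already in use. Illustrative only.
    package main

    import "fmt"

    // takenSubnets would normally be derived from the docker bridge networks;
    // hard-coded to the networks seen in this run.
    var takenSubnets = map[string]bool{
    	"192.168.49.0/24": true,
    	"192.168.58.0/24": true,
    	"192.168.67.0/24": true,
    	"192.168.76.0/24": true,
    	"192.168.85.0/24": true,
    	"192.168.94.0/24": true,
    }

    func firstFreeSubnet() (string, bool) {
    	for octet := 49; octet <= 246; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !takenSubnets[subnet] {
    			return subnet, true
    		}
    	}
    	return "", false
    }

    func main() {
    	if s, ok := firstFreeSubnet(); ok {
    		// With the networks above this prints 192.168.103.0/24,
    		// matching the subnet the log picks for embed-certs-931091.
    		fmt.Println("using free private subnet", s)
    	}
    }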
	I1206 09:08:58.777905  279021 out.go:252] * Updating the running docker "kubernetes-upgrade-702638" container ...
	I1206 09:08:58.777951  279021 machine.go:94] provisionDockerMachine start ...
	I1206 09:08:58.778051  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:08:58.798537  279021 main.go:143] libmachine: Using SSH client type: native
	I1206 09:08:58.798877  279021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1206 09:08:58.798895  279021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:08:58.930585  279021 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-702638
	
	I1206 09:08:58.930613  279021 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-702638"
	I1206 09:08:58.930672  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:08:58.950559  279021 main.go:143] libmachine: Using SSH client type: native
	I1206 09:08:58.950809  279021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1206 09:08:58.950823  279021 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-702638 && echo "kubernetes-upgrade-702638" | sudo tee /etc/hostname
	I1206 09:08:59.093870  279021 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-702638
	
	I1206 09:08:59.093943  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:08:59.116214  279021 main.go:143] libmachine: Using SSH client type: native
	I1206 09:08:59.116427  279021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1206 09:08:59.116440  279021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-702638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-702638/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-702638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:08:59.247680  279021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:08:59.247710  279021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:08:59.247747  279021 ubuntu.go:190] setting up certificates
	I1206 09:08:59.247757  279021 provision.go:84] configureAuth start
	I1206 09:08:59.247825  279021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-702638
	I1206 09:08:59.266357  279021 provision.go:143] copyHostCerts
	I1206 09:08:59.266421  279021 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:08:59.266433  279021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:08:59.266502  279021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:08:59.266626  279021 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:08:59.266641  279021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:08:59.266669  279021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:08:59.266738  279021 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:08:59.266745  279021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:08:59.266767  279021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:08:59.266828  279021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-702638 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-702638 localhost minikube]
	I1206 09:08:59.721647  279021 provision.go:177] copyRemoteCerts
	I1206 09:08:59.721719  279021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:08:59.721752  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:08:59.740515  279021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kubernetes-upgrade-702638/id_rsa Username:docker}
	I1206 09:08:59.835468  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:08:59.852812  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 09:08:59.870076  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:08:59.887181  279021 provision.go:87] duration metric: took 639.40849ms to configureAuth
	I1206 09:08:59.887206  279021 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:08:59.887348  279021 config.go:182] Loaded profile config "kubernetes-upgrade-702638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:08:59.887437  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:08:59.906149  279021 main.go:143] libmachine: Using SSH client type: native
	I1206 09:08:59.906392  279021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1206 09:08:59.906416  279021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:09:01.479481  279021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:09:01.479512  279021 machine.go:97] duration metric: took 2.701553646s to provisionDockerMachine
	I1206 09:09:01.479525  279021 start.go:293] postStartSetup for "kubernetes-upgrade-702638" (driver="docker")
	I1206 09:09:01.479539  279021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:09:01.479604  279021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:09:01.479652  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:09:01.502048  279021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kubernetes-upgrade-702638/id_rsa Username:docker}
	I1206 09:09:01.604316  279021 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:09:01.608669  279021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:09:01.608703  279021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:09:01.608715  279021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:09:01.608768  279021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:09:01.608860  279021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:09:01.608980  279021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:09:01.616926  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:01.635026  279021 start.go:296] duration metric: took 155.485305ms for postStartSetup
	I1206 09:09:01.635109  279021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:09:01.635162  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:09:01.656074  279021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kubernetes-upgrade-702638/id_rsa Username:docker}
	I1206 09:09:01.761555  279021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:09:01.771255  279021 fix.go:56] duration metric: took 3.015732996s for fixHost
	I1206 09:09:01.771281  279021 start.go:83] releasing machines lock for "kubernetes-upgrade-702638", held for 3.01578528s
	I1206 09:09:01.771345  279021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-702638
	I1206 09:09:01.800059  279021 ssh_runner.go:195] Run: cat /version.json
	I1206 09:09:01.800097  279021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:09:01.800114  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:09:01.800180  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:09:01.824126  279021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kubernetes-upgrade-702638/id_rsa Username:docker}
	I1206 09:09:01.825464  279021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kubernetes-upgrade-702638/id_rsa Username:docker}
	I1206 09:09:01.992341  279021 ssh_runner.go:195] Run: systemctl --version
	I1206 09:09:01.999462  279021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:09:02.039327  279021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:09:02.045010  279021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:09:02.045093  279021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:09:02.054865  279021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:09:02.054890  279021 start.go:496] detecting cgroup driver to use...
	I1206 09:09:02.054929  279021 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:09:02.054972  279021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:09:02.071915  279021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:09:02.089248  279021 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:09:02.089308  279021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:09:02.106285  279021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:09:02.121836  279021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:09:02.246300  279021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:09:02.355259  279021 docker.go:234] disabling docker service ...
	I1206 09:09:02.355327  279021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:09:02.369402  279021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:09:02.382349  279021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:09:02.491196  279021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:09:02.613632  279021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:09:02.633292  279021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:09:02.651847  279021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:09:02.651901  279021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:02.662562  279021 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:09:02.662634  279021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:02.672900  279021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:02.682894  279021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:02.693313  279021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:09:02.703061  279021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:02.713277  279021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:02.724129  279021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:02.734847  279021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:09:02.745280  279021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:09:02.761256  279021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:02.896152  279021 ssh_runner.go:195] Run: sudo systemctl restart crio
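Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl before crio is restarted. A sketch of how one might read those keys back (reconstructed from the commands above, not copied from the node):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the sed commands:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])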
	I1206 09:09:03.085733  279021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:09:03.085795  279021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:09:03.089967  279021 start.go:564] Will wait 60s for crictl version
	I1206 09:09:03.090035  279021 ssh_runner.go:195] Run: which crictl
	I1206 09:09:03.093781  279021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:09:03.119023  279021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:09:03.119101  279021 ssh_runner.go:195] Run: crio --version
	I1206 09:09:03.148919  279021 ssh_runner.go:195] Run: crio --version
	I1206 09:09:03.179857  279021 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 09:09:03.181298  279021 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-702638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:09:03.201602  279021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:09:03.206146  279021 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-702638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-702638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:09:03.206263  279021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:09:03.206304  279021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:03.238178  279021 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:03.238196  279021 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:09:03.238236  279021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:03.265008  279021 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:03.265035  279021 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:09:03.265044  279021 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 09:09:03.265141  279021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-702638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-702638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
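The kubelet unit and the ExecStart override shown above are later copied to the node as /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the two "scp memory" lines further down). A hedged way to see which ExecStart systemd ends up using, not part of the test run:

	systemctl cat kubelet                 # prints the unit plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet   # the effective ExecStart after the drop-in is applied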
	I1206 09:09:03.265201  279021 ssh_runner.go:195] Run: crio config
	I1206 09:09:03.312300  279021 cni.go:84] Creating CNI manager for ""
	I1206 09:09:03.312323  279021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:03.312343  279021 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:09:03.312376  279021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-702638 NodeName:kubernetes-upgrade-702638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:09:03.312552  279021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-702638"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:09:03.312628  279021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:09:03.320891  279021 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:09:03.320974  279021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:09:03.330052  279021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1206 09:09:03.347677  279021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:09:03.362917  279021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
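At this point the generated kubeadm config has been staged as /var/tmp/minikube/kubeadm.yaml.new (2228 bytes). If one wanted to sanity-check that file by hand, a dry run with the same kubeadm binary is one option; this is a hypothetical check, not something the test performs:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run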
	I1206 09:09:03.377016  279021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:09:03.380729  279021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:03.497391  279021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:03.513213  279021 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638 for IP: 192.168.85.2
	I1206 09:09:03.513231  279021 certs.go:195] generating shared ca certs ...
	I1206 09:09:03.513249  279021 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:03.513395  279021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:09:03.513446  279021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:09:03.513457  279021 certs.go:257] generating profile certs ...
	I1206 09:09:03.513565  279021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.key
	I1206 09:09:03.513617  279021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/apiserver.key.8f0643fe
	I1206 09:09:03.513661  279021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/proxy-client.key
	I1206 09:09:03.513764  279021 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:09:03.513793  279021 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:09:03.513802  279021 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:09:03.513824  279021 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:09:03.513849  279021 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:09:03.513872  279021 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:09:03.513917  279021 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:03.514495  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:09:03.533159  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:09:03.552852  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:09:03.572066  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:09:03.591176  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 09:09:03.609301  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:09:03.627818  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:09:03.646822  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:09:03.666271  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:09:03.689938  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:09:03.708165  279021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:09:03.726038  279021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:09:03.738959  279021 ssh_runner.go:195] Run: openssl version
	I1206 09:09:03.745243  279021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:09:03.753024  279021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:09:03.760828  279021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:09:03.764998  279021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:09:03.765063  279021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:09:03.806099  279021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:09:03.814426  279021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:09:03.822227  279021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:09:03.830001  279021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:09:03.834214  279021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:09:03.834265  279021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:09:03.870719  279021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:03.878604  279021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:03.886286  279021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:09:03.893923  279021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:03.898980  279021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:03.899070  279021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:03.938040  279021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
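The /etc/ssl/certs/51391683.0, 3ec20f2e.0 and b5213941.0 links tested above follow OpenSSL's subject-hash naming: the file name is the output of `openssl x509 -hash` for the corresponding PEM plus a ".0" suffix. A hedged illustration of how such a link could be created by hand for the minikube CA:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # per the log, h is b5213941 here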
	I1206 09:09:03.945837  279021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:09:03.949677  279021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:09:03.988630  279021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:09:04.025541  279021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:09:04.064234  279021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:09:04.102862  279021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:09:04.141808  279021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
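Each openssl call above uses -checkend 86400, which exits non-zero if the certificate would expire within the next 86400 seconds (24 hours); a failure here is presumably what would push minikube to regenerate control-plane certs instead of reusing them. The same check, stand-alone, as an example:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"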
	I1206 09:09:04.180977  279021 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-702638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-702638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:04.181103  279021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:09:04.181152  279021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:09:04.217014  279021 cri.go:89] found id: "7a26e5f9af2a7c28b93fcf0514390d00f4068c353984baeffd1d4cbd5e89f6dd"
	I1206 09:09:04.217048  279021 cri.go:89] found id: "6dbea81c889731c49a65dac2f8e663f47e83de85388c25e6083e64cc3f958759"
	I1206 09:09:04.217082  279021 cri.go:89] found id: "603ebe6ede0345133a68eee0500f13bdfdf52dfae0a3b77c2cb66ee0e83a5872"
	I1206 09:09:04.217089  279021 cri.go:89] found id: "3d7b51931c5d3c3fe1bb5573ccf10bc613dd7934b693b9f89ee7e15e72252e51"
	I1206 09:09:04.217094  279021 cri.go:89] found id: "4d2994a2506e6c11d937fddbee953cb9e445531b8bdfa82156bead1597a9b653"
	I1206 09:09:04.217100  279021 cri.go:89] found id: "5b346ece4dde8ffc932c1c91049372ecad8b5e37f5d2fe56f4b9f7dcf9478b04"
	I1206 09:09:04.217105  279021 cri.go:89] found id: ""
	I1206 09:09:04.217156  279021 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:09:04.228944  279021 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:04Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:04.229025  279021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:09:04.237212  279021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:09:04.237234  279021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:09:04.237280  279021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:09:04.244650  279021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:09:04.245356  279021 kubeconfig.go:125] found "kubernetes-upgrade-702638" server: "https://192.168.85.2:8443"
	I1206 09:09:04.246335  279021 kapi.go:59] client config for kubernetes-upgrade-702638: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.key", CAFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:09:04.246741  279021 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 09:09:04.246755  279021 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 09:09:04.246760  279021 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 09:09:04.246767  279021 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 09:09:04.246778  279021 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 09:09:04.247139  279021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:09:04.254839  279021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 09:09:04.254869  279021 kubeadm.go:602] duration metric: took 17.629554ms to restartPrimaryControlPlane
	I1206 09:09:04.254878  279021 kubeadm.go:403] duration metric: took 73.910399ms to StartCluster
	I1206 09:09:04.254894  279021 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:04.254961  279021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:04.256199  279021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:04.256435  279021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:04.256501  279021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:09:04.256590  279021 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-702638"
	I1206 09:09:04.256614  279021 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-702638"
	W1206 09:09:04.256623  279021 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:09:04.256650  279021 host.go:66] Checking if "kubernetes-upgrade-702638" exists ...
	I1206 09:09:04.256670  279021 config.go:182] Loaded profile config "kubernetes-upgrade-702638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:04.256670  279021 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-702638"
	I1206 09:09:04.256734  279021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-702638"
	I1206 09:09:04.257045  279021 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-702638 --format={{.State.Status}}
	I1206 09:09:04.257141  279021 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-702638 --format={{.State.Status}}
	I1206 09:09:04.261587  279021 out.go:179] * Verifying Kubernetes components...
	I1206 09:09:04.263245  279021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:04.280145  279021 kapi.go:59] client config for kubernetes-upgrade-702638: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.key", CAFile:"/home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:09:04.280426  279021 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-702638"
	W1206 09:09:04.280441  279021 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:09:04.280465  279021 host.go:66] Checking if "kubernetes-upgrade-702638" exists ...
	I1206 09:09:04.280788  279021 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-702638 --format={{.State.Status}}
	I1206 09:09:04.280969  279021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:09:01.081923  278230 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-931091:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.224760051s)
	I1206 09:09:01.081967  278230 kic.go:203] duration metric: took 4.224913576s to extract preloaded images to volume ...
	W1206 09:09:01.082064  278230 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:09:01.082106  278230 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:09:01.082160  278230 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:09:01.143584  278230 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-931091 --name embed-certs-931091 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-931091 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-931091 --network embed-certs-931091 --ip 192.168.103.2 --volume embed-certs-931091:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
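The docker run above publishes the container's SSH, Docker, registry and API-server ports to dynamically assigned loopback ports (the --publish=127.0.0.1::<port> flags). The SSH port that later appears in this log as 33078 could be resolved the same way with docker port; a hedged example:

	docker port embed-certs-931091 22/tcp     # e.g. 127.0.0.1:33078
	docker port embed-certs-931091 8443/tcp   # host port mapped to the API server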
	I1206 09:09:01.444471  278230 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Running}}
	I1206 09:09:01.465468  278230 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:09:01.488745  278230 cli_runner.go:164] Run: docker exec embed-certs-931091 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:09:01.536971  278230 oci.go:144] the created container "embed-certs-931091" has a running status.
	I1206 09:09:01.537123  278230 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa...
	I1206 09:09:01.730238  278230 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:09:01.771060  278230 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:09:01.801319  278230 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:09:01.801333  278230 kic_runner.go:114] Args: [docker exec --privileged embed-certs-931091 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:09:01.880451  278230 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:09:01.902748  278230 machine.go:94] provisionDockerMachine start ...
	I1206 09:09:01.902834  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:01.926809  278230 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:01.927160  278230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1206 09:09:01.927183  278230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:09:02.064963  278230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-931091
	
	I1206 09:09:02.065066  278230 ubuntu.go:182] provisioning hostname "embed-certs-931091"
	I1206 09:09:02.065155  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:02.088598  278230 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:02.088844  278230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1206 09:09:02.088861  278230 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-931091 && echo "embed-certs-931091" | sudo tee /etc/hostname
	I1206 09:09:02.242453  278230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-931091
	
	I1206 09:09:02.242549  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:02.265438  278230 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:02.265720  278230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1206 09:09:02.265750  278230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-931091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-931091/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-931091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:09:02.400376  278230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:09:02.400411  278230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:09:02.400446  278230 ubuntu.go:190] setting up certificates
	I1206 09:09:02.400457  278230 provision.go:84] configureAuth start
	I1206 09:09:02.400518  278230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-931091
	I1206 09:09:02.426163  278230 provision.go:143] copyHostCerts
	I1206 09:09:02.426229  278230 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:09:02.426243  278230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:09:02.426321  278230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:09:02.426510  278230 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:09:02.426526  278230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:09:02.426573  278230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:09:02.426671  278230 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:09:02.426681  278230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:09:02.426718  278230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:09:02.426818  278230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.embed-certs-931091 san=[127.0.0.1 192.168.103.2 embed-certs-931091 localhost minikube]
	I1206 09:09:02.505391  278230 provision.go:177] copyRemoteCerts
	I1206 09:09:02.505454  278230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:09:02.505506  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:02.526174  278230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:09:02.625152  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:09:02.649749  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:09:02.670511  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:09:02.691635  278230 provision.go:87] duration metric: took 291.15926ms to configureAuth
	I1206 09:09:02.691664  278230 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:09:02.691851  278230 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:02.691956  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:02.712783  278230 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:02.713069  278230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1206 09:09:02.713095  278230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:09:03.010938  278230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:09:03.010976  278230 machine.go:97] duration metric: took 1.108209534s to provisionDockerMachine
	I1206 09:09:03.011000  278230 client.go:176] duration metric: took 6.747462231s to LocalClient.Create
	I1206 09:09:03.011026  278230 start.go:167] duration metric: took 6.747528878s to libmachine.API.Create "embed-certs-931091"
	I1206 09:09:03.011035  278230 start.go:293] postStartSetup for "embed-certs-931091" (driver="docker")
	I1206 09:09:03.011048  278230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:09:03.011117  278230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:09:03.011167  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:03.031485  278230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:09:03.135230  278230 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:09:03.139268  278230 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:09:03.139300  278230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:09:03.139313  278230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:09:03.139379  278230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:09:03.139489  278230 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:09:03.139699  278230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:09:03.147557  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:03.171031  278230 start.go:296] duration metric: took 159.981398ms for postStartSetup
	I1206 09:09:03.171467  278230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-931091
	I1206 09:09:03.191950  278230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/config.json ...
	I1206 09:09:03.192210  278230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:09:03.192250  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:03.211416  278230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:09:03.305181  278230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:09:03.310608  278230 start.go:128] duration metric: took 7.049024659s to createHost
	I1206 09:09:03.310634  278230 start.go:83] releasing machines lock for "embed-certs-931091", held for 7.049163982s
	I1206 09:09:03.310703  278230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-931091
	I1206 09:09:03.329941  278230 ssh_runner.go:195] Run: cat /version.json
	I1206 09:09:03.330004  278230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:09:03.330028  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:03.330102  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:03.353325  278230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:09:03.353960  278230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:09:03.503552  278230 ssh_runner.go:195] Run: systemctl --version
	I1206 09:09:03.511669  278230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:09:03.549864  278230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:09:03.555176  278230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:09:03.555223  278230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:09:03.581807  278230 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:09:03.581830  278230 start.go:496] detecting cgroup driver to use...
	I1206 09:09:03.581862  278230 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:09:03.581913  278230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:09:03.598106  278230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:09:03.610943  278230 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:09:03.611007  278230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:09:03.628976  278230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:09:03.647954  278230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:09:03.736357  278230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:09:03.832815  278230 docker.go:234] disabling docker service ...
	I1206 09:09:03.832873  278230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:09:03.851826  278230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:09:03.864894  278230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:09:03.956430  278230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:09:04.040642  278230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:09:04.053891  278230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:09:04.069947  278230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:09:04.070034  278230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:04.080958  278230 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:09:04.081079  278230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:04.090311  278230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:04.099300  278230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:04.108807  278230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:09:04.117519  278230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:04.126763  278230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:04.141040  278230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:04.151193  278230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:09:04.159200  278230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:09:04.167070  278230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:04.255615  278230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:09:04.425496  278230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:09:04.425566  278230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:09:04.430276  278230 start.go:564] Will wait 60s for crictl version
	I1206 09:09:04.430342  278230 ssh_runner.go:195] Run: which crictl
	I1206 09:09:04.434936  278230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:09:04.467355  278230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:09:04.467428  278230 ssh_runner.go:195] Run: crio --version
	I1206 09:09:04.501350  278230 ssh_runner.go:195] Run: crio --version
	I1206 09:09:04.539564  278230 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
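	(For reference: the CRI-O reconfiguration that run 278230 performs in the lines above, collected into one shell sequence. Commands, paths and values are taken from the log itself; this is a sketch of the sequence as logged, not minikube's own script.)

		# write the crictl endpoint, then point CRI-O at the pause image and systemd cgroup driver
		printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		# enable forwarding, reload units and restart the runtime before waiting on crio.sock
		sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
		sudo systemctl daemon-reload
		sudo systemctl restart crio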
	I1206 09:09:04.282131  279021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:04.282153  279021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:09:04.282195  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:09:04.304051  279021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:04.304074  279021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:09:04.304136  279021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-702638
	I1206 09:09:04.305092  279021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kubernetes-upgrade-702638/id_rsa Username:docker}
	I1206 09:09:04.331627  279021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kubernetes-upgrade-702638/id_rsa Username:docker}
	I1206 09:09:04.403165  279021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:04.417601  279021 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:09:04.417689  279021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:09:04.422044  279021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:04.431107  279021 api_server.go:72] duration metric: took 174.640798ms to wait for apiserver process to appear ...
	I1206 09:09:04.431139  279021 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:09:04.431178  279021 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 09:09:04.436142  279021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:04.437657  279021 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 09:09:04.445421  279021 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:09:04.445449  279021 api_server.go:131] duration metric: took 14.303306ms to wait for apiserver health ...
	I1206 09:09:04.445458  279021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:09:04.448922  279021 system_pods.go:59] 9 kube-system pods found
	I1206 09:09:04.448954  279021 system_pods.go:61] "coredns-7d764666f9-vhwxx" [cdf007dd-b6ce-4aec-80c2-c38015b17e35] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:09:04.448961  279021 system_pods.go:61] "coredns-7d764666f9-x688s" [50d5e324-d41e-4a0a-bcb8-f80b40d37e27] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:09:04.448967  279021 system_pods.go:61] "etcd-kubernetes-upgrade-702638" [20aec520-79ac-4808-82cb-348317b48eb9] Running
	I1206 09:09:04.448971  279021 system_pods.go:61] "kindnet-l5ggh" [0fd15354-bdbd-4590-b0b1-36b8a007427a] Running
	I1206 09:09:04.448977  279021 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-702638" [a83b6a2f-36ab-452b-b2dc-66245e09e618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:09:04.448983  279021 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-702638" [e1fac785-9e02-4530-8fd6-db5dbde066b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:09:04.449020  279021 system_pods.go:61] "kube-proxy-rbzss" [44616403-53c9-4d3e-a046-65046b14d9c1] Running
	I1206 09:09:04.449029  279021 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-702638" [9e0d00ab-6d97-4285-95ec-064cd3340d3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:09:04.449036  279021 system_pods.go:61] "storage-provisioner" [ede2cd0d-68fa-4253-b25a-b7a44bffeb0d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:09:04.449045  279021 system_pods.go:74] duration metric: took 3.580735ms to wait for pod list to return data ...
	I1206 09:09:04.449057  279021 kubeadm.go:587] duration metric: took 192.593749ms to wait for: map[apiserver:true system_pods:true]
	I1206 09:09:04.449077  279021 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:09:04.451730  279021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:09:04.451756  279021 node_conditions.go:123] node cpu capacity is 8
	I1206 09:09:04.451772  279021 node_conditions.go:105] duration metric: took 2.689818ms to run NodePressure ...
	I1206 09:09:04.451785  279021 start.go:242] waiting for startup goroutines ...
	I1206 09:09:04.960734  279021 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:09:04.961914  279021 addons.go:530] duration metric: took 705.411996ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:09:04.961961  279021 start.go:247] waiting for cluster config update ...
	I1206 09:09:04.961976  279021 start.go:256] writing updated cluster config ...
	I1206 09:09:04.962252  279021 ssh_runner.go:195] Run: rm -f paused
	I1206 09:09:05.021033  279021 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:09:05.022623  279021 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-702638" cluster and "default" namespace by default
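	(At this point the kubeconfig context for the profile is written; a minimal manual follow-up check, assuming the context name matches the profile name as minikube normally sets it, could be:

		kubectl --context kubernetes-upgrade-702638 get nodes
		kubectl --context kubernetes-upgrade-702638 -n kube-system get pods

	This is only illustrative; the test harness performs its own apiserver and pod checks, as shown in the lines above.)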
	I1206 09:09:04.540870  278230 cli_runner.go:164] Run: docker network inspect embed-certs-931091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:09:04.564929  278230 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1206 09:09:04.569828  278230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:04.581930  278230 kubeadm.go:884] updating cluster {Name:embed-certs-931091 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-931091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:09:04.582067  278230 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:09:04.582132  278230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:04.618153  278230 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:04.618178  278230 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:09:04.618229  278230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:04.646086  278230 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:04.646107  278230 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:09:04.646114  278230 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1206 09:09:04.646190  278230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-931091 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-931091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:09:04.646246  278230 ssh_runner.go:195] Run: crio config
	I1206 09:09:04.708181  278230 cni.go:84] Creating CNI manager for ""
	I1206 09:09:04.708209  278230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:04.708229  278230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:09:04.708259  278230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-931091 NodeName:embed-certs-931091 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:09:04.708413  278230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-931091"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
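	(The kubeadm config above is staged to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick way to sanity-check such a file on the node, assuming the bundled kubeadm binary sits next to the kubelet binary under /var/lib/minikube/binaries/v1.34.2, would be:

		sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

	kubeadm config validate only parses and validates the file; it does not touch the cluster.)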
	I1206 09:09:04.708488  278230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:09:04.718105  278230 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:09:04.718170  278230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:09:04.728911  278230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1206 09:09:04.742316  278230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:09:04.758607  278230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1206 09:09:04.773768  278230 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:09:04.777546  278230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:04.787841  278230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:04.883259  278230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:04.903775  278230 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091 for IP: 192.168.103.2
	I1206 09:09:04.903796  278230 certs.go:195] generating shared ca certs ...
	I1206 09:09:04.903815  278230 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:04.904023  278230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:09:04.904096  278230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:09:04.904109  278230 certs.go:257] generating profile certs ...
	I1206 09:09:04.904161  278230 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/client.key
	I1206 09:09:04.904174  278230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/client.crt with IP's: []
	I1206 09:09:05.223635  278230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/client.crt ...
	I1206 09:09:05.223664  278230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/client.crt: {Name:mk1415cffba9e86c93d1324a536d2d0f10e4fb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:05.223828  278230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/client.key ...
	I1206 09:09:05.223841  278230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/client.key: {Name:mkb0ebf65ef89a3b942373a81b89aa7e42e8c697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:05.223925  278230 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.key.387b23fa
	I1206 09:09:05.223941  278230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.crt.387b23fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1206 09:09:05.431368  278230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.crt.387b23fa ...
	I1206 09:09:05.431394  278230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.crt.387b23fa: {Name:mke65a1576531ec2dae6eca0b84f76f9df64a4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:05.431550  278230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.key.387b23fa ...
	I1206 09:09:05.431563  278230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.key.387b23fa: {Name:mk0702a4a39772a2a05364bfb830d377cb84dbbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:05.431640  278230 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.crt.387b23fa -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.crt
	I1206 09:09:05.431708  278230 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.key.387b23fa -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.key
	I1206 09:09:05.431764  278230 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.key
	I1206 09:09:05.431780  278230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.crt with IP's: []
	I1206 09:09:05.544736  278230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.crt ...
	I1206 09:09:05.544768  278230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.crt: {Name:mk10e8ab323f7ef884fdf0fa09fca96e808a747b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:05.544958  278230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.key ...
	I1206 09:09:05.544976  278230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.key: {Name:mk3713d4bdebe83eea23acdcc57cae6a7b06f716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:05.545229  278230 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:09:05.545278  278230 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:09:05.545292  278230 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:09:05.545322  278230 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:09:05.545355  278230 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:09:05.545388  278230 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:09:05.545443  278230 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:05.546192  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:09:05.564360  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:09:05.582172  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:09:05.599394  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:09:05.616875  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1206 09:09:05.634244  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:09:05.651566  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:09:05.668735  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:09:05.686012  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:09:05.705915  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:09:05.723788  278230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:09:05.742125  278230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:09:05.754161  278230 ssh_runner.go:195] Run: openssl version
	I1206 09:09:05.760053  278230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:05.767310  278230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:09:05.774664  278230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:05.778284  278230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:05.778338  278230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:05.819914  278230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:09:05.827709  278230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:09:05.835517  278230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:09:05.845522  278230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:09:05.854877  278230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:09:05.858641  278230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:09:05.858701  278230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:09:05.893636  278230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:09:05.901485  278230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:09:05.909107  278230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:09:05.916528  278230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:09:05.924019  278230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:09:05.927896  278230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:09:05.927938  278230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:09:05.963605  278230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:05.972019  278230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
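	(The hash-named links created above, b5213941.0, 51391683.0 and 3ec20f2e.0, follow the OpenSSL subject-hash convention: the link name is the certificate's subject hash with a .0 suffix for the first entry, which is why the log runs openssl x509 -hash -noout before choosing each target name. A minimal sketch reproducing one of them by hand, using a path from the log:

		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)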
	I1206 09:09:05.980875  278230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:09:05.984652  278230 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:09:05.984715  278230 kubeadm.go:401] StartCluster: {Name:embed-certs-931091 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-931091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:05.984786  278230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:09:05.984834  278230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:09:06.012345  278230 cri.go:89] found id: ""
	I1206 09:09:06.012416  278230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:09:06.021545  278230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:09:06.029383  278230 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:09:06.029444  278230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:09:06.037177  278230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:09:06.037194  278230 kubeadm.go:158] found existing configuration files:
	
	I1206 09:09:06.037239  278230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:09:06.044862  278230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:09:06.044923  278230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	
	
	==> CRI-O <==
	Dec 06 09:08:48 no-preload-769733 crio[569]: time="2025-12-06T09:08:48.931779671Z" level=info msg="Started container" PID=1688 containerID=a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper id=770ccc20-f8e4-4923-8718-6d1a5e8680ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=3bd04476d0ce1662b2592f5e9b4f6ceb630441b9059be7cedd7a5ed025f35d83
	Dec 06 09:08:49 no-preload-769733 crio[569]: time="2025-12-06T09:08:49.876028562Z" level=info msg="Removing container: 30c590b10c45fe4553ebdb8be8e251a94d23ec7adbd08dd6b08fa8782ae6546b" id=6a08fcab-be03-4a11-ac9f-65f5d611a3ca name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:08:50 no-preload-769733 crio[569]: time="2025-12-06T09:08:50.984738021Z" level=info msg="Removed container 30c590b10c45fe4553ebdb8be8e251a94d23ec7adbd08dd6b08fa8782ae6546b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper" id=6a08fcab-be03-4a11-ac9f-65f5d611a3ca name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.022830859Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=e94639e8-957a-483f-81da-e7e9c342853c name=/runtime.v1.ImageService/PullImage
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.023546902Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=fb2d4544-7517-4def-b938-cfd5a78423de name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.025779037Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3947765e-2ee4-48d5-9ca0-60cebbec47ad name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.030611967Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8/kubernetes-dashboard" id=4172e999-542c-4010-86fe-002c9ab1b034 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.030739636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.0360444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.036345369Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5937f64a4fbec1c6d7a8e9e4191b5465a637f74544eb6fbc136edec54ec40f36/merged/etc/group: no such file or directory"
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.036809067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.067348689Z" level=info msg="Created container 6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d: kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8/kubernetes-dashboard" id=4172e999-542c-4010-86fe-002c9ab1b034 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.068072341Z" level=info msg="Starting container: 6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d" id=07b3b6c4-7aef-4d92-8e09-9ffb4a4f465d name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.070558643Z" level=info msg="Started container" PID=1731 containerID=6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d description=kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8/kubernetes-dashboard id=07b3b6c4-7aef-4d92-8e09-9ffb4a4f465d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a3d6f85ae4654487052656a6910622821a38cc67f5c51ee27eb7bd607c7ed52
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.800687966Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b29f44c1-1e9e-4240-81e7-f80bb65a4567 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.806744814Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=635a9872-28c3-4371-bcce-bc106feda30c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.809929985Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper" id=1e4b99a6-2246-46fe-adce-2761bf86fba8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.810088622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.818743602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.819295946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.961712138Z" level=info msg="Created container 5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper" id=1e4b99a6-2246-46fe-adce-2761bf86fba8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.963301878Z" level=info msg="Starting container: 5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd" id=28777edd-bf14-45de-bf1d-bbfd94ebb442 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.965795893Z" level=info msg="Started container" PID=1767 containerID=5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper id=28777edd-bf14-45de-bf1d-bbfd94ebb442 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3bd04476d0ce1662b2592f5e9b4f6ceb630441b9059be7cedd7a5ed025f35d83
	Dec 06 09:09:01 no-preload-769733 crio[569]: time="2025-12-06T09:09:01.918562549Z" level=info msg="Removing container: a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d" id=ed3e4e50-965a-4752-b441-399b326872f6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:09:01 no-preload-769733 crio[569]: time="2025-12-06T09:09:01.929833985Z" level=info msg="Removed container a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper" id=ed3e4e50-965a-4752-b441-399b326872f6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5144e6e15dbfe       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   2                   3bd04476d0ce1       dashboard-metrics-scraper-867fb5f87b-gd7xs   kubernetes-dashboard
	6a489dd16d056       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   16 seconds ago      Running             kubernetes-dashboard        0                   6a3d6f85ae465       kubernetes-dashboard-b84665fb8-nz2h8         kubernetes-dashboard
	3c32925f8ca5e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           28 seconds ago      Running             coredns                     0                   399fd681077f7       coredns-7d764666f9-jllj2                     kube-system
	acb0b0f53951a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           28 seconds ago      Running             busybox                     1                   791c2baf7a465       busybox                                      default
	13a307ecfdb80       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           31 seconds ago      Running             kube-proxy                  0                   1406844a6aee4       kube-proxy-5jsq2                             kube-system
	60efef5c2c46c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           31 seconds ago      Exited              storage-provisioner         0                   f21043e641148       storage-provisioner                          kube-system
	ec86b1cf9e163       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           31 seconds ago      Running             kindnet-cni                 0                   53b627ee1a5da       kindnet-7m8h6                                kube-system
	d0afbad498b73       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           34 seconds ago      Running             etcd                        0                   6417f5c956ee4       etcd-no-preload-769733                       kube-system
	64e6d8d09b82d       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           34 seconds ago      Running             kube-apiserver              0                   7a2d7b028bb2e       kube-apiserver-no-preload-769733             kube-system
	ddbc73ad7fa5d       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           34 seconds ago      Running             kube-controller-manager     0                   748e3118e9f91       kube-controller-manager-no-preload-769733    kube-system
	2822e91cc3bdb       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           34 seconds ago      Running             kube-scheduler              0                   3b3fe1533d66f       kube-scheduler-no-preload-769733             kube-system
	
	
	==> coredns [3c32925f8ca5ebb1fe444e961d3933b6ab892cbdaa78228462c024661935c5e9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49913 - 34303 "HINFO IN 6491382940492496492.2672897898509078328. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.419604277s
	
	
	==> describe nodes <==
	Name:               no-preload-769733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-769733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=no-preload-769733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_07_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:07:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-769733
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:08:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:08:45 +0000   Sat, 06 Dec 2025 09:07:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:08:45 +0000   Sat, 06 Dec 2025 09:07:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:08:45 +0000   Sat, 06 Dec 2025 09:07:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:08:45 +0000   Sat, 06 Dec 2025 09:08:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-769733
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                102fff6b-fbfc-491f-a5fe-409060b67cce
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 coredns-7d764666f9-jllj2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     87s
	  kube-system                 etcd-no-preload-769733                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         92s
	  kube-system                 kindnet-7m8h6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-no-preload-769733              250m (3%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-no-preload-769733     200m (2%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-5jsq2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-no-preload-769733              100m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-gd7xs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-nz2h8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  88s   node-controller  Node no-preload-769733 event: Registered Node no-preload-769733 in Controller
	  Normal  RegisteredNode  30s   node-controller  Node no-preload-769733 event: Registered Node no-preload-769733 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [d0afbad498b737f9a326bbd880b916eec7f277a7a2d1bbc94bf74de4be59afa5] <==
	{"level":"warn","ts":"2025-12-06T09:08:34.185560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.192733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.203598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.210761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.217765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.224593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.231979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.238145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.245807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.252310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.259142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.265782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.272240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.278853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.292938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.301611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.310353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.318938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.326471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.348219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.351909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.358682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.365340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.371827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.428507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57228","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:08 up 51 min,  0 user,  load average: 2.92, 2.27, 1.70
	Linux no-preload-769733 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec86b1cf9e1633d8e898b3b09622c6c417c612e379a6bd67b0c3a5ffbb3629f8] <==
	I1206 09:08:36.395543       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:08:36.395825       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1206 09:08:36.395948       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:08:36.395963       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:08:36.396015       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:08:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:08:36.690897       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:08:36.692141       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:08:36.692212       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:08:36.692378       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:08:36.892796       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:08:36.892821       1 metrics.go:72] Registering metrics
	I1206 09:08:36.892866       1 controller.go:711] "Syncing nftables rules"
	I1206 09:08:46.596069       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:08:46.596125       1 main.go:301] handling current node
	I1206 09:08:56.598486       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:08:56.598526       1 main.go:301] handling current node
	I1206 09:09:06.599291       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:09:06.599323       1 main.go:301] handling current node
	
	
	==> kube-apiserver [64e6d8d09b82ddda9e75d07401cf42a63194cca950c2d205b309768e43ff4df9] <==
	I1206 09:08:35.057720       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:08:35.057743       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:08:35.059553       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 09:08:35.059641       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:35.059691       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:08:35.059965       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:08:35.059999       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:08:35.060185       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1206 09:08:35.067964       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:08:35.097818       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:08:35.106156       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:35.106818       1 policy_source.go:248] refreshing policies
	I1206 09:08:35.190837       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:08:35.431062       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:08:35.461608       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:08:35.482540       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:08:35.492437       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:08:35.499862       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:08:35.536314       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.172.219"}
	I1206 09:08:35.547038       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.6.134"}
	I1206 09:08:35.943854       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:08:38.710567       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:08:38.761314       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:08:38.864356       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:08:39.011527       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ddbc73ad7fa5d9113b7d32a21294ac2f252d10dea2e4d287294a4318592979c5] <==
	I1206 09:08:38.215214       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215214       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215229       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215231       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215239       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.217791       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:08:38.217871       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-769733"
	I1206 09:08:38.217913       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1206 09:08:38.215254       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.218019       1 range_allocator.go:177] "Sending events to api server"
	I1206 09:08:38.218069       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1206 09:08:38.218076       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:08:38.218083       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215246       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215240       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215249       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215257       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.221963       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:08:38.238660       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.315577       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.315598       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:08:38.315604       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:08:38.323171       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.868176       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1206 09:08:48.220275       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [13a307ecfdb80882ff1f7a7ff010df7e0ecb383b512e2d747867e48a8c753e32] <==
	I1206 09:08:36.190399       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:08:36.272295       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:08:36.372846       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:36.372875       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1206 09:08:36.372937       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:08:36.394292       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:08:36.394343       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:08:36.400984       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:08:36.401425       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:08:36.401445       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:08:36.403125       1 config.go:200] "Starting service config controller"
	I1206 09:08:36.403163       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:08:36.403205       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:08:36.403213       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:08:36.403164       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:08:36.403329       1 config.go:309] "Starting node config controller"
	I1206 09:08:36.403340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:08:36.403347       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:08:36.403332       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:08:36.503427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:08:36.503446       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:08:36.503465       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2822e91cc3bdbcb4954732f408d6dd842598613c1df6081098b5a50c9e302aa6] <==
	I1206 09:08:33.690933       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:08:34.970776       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:08:34.970828       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:08:34.970840       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:08:34.970849       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:08:35.030265       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:08:35.030359       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:08:35.037464       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:08:35.037571       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:08:35.037949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:08:35.039739       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:08:35.138458       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:08:49 no-preload-769733 kubelet[720]: E1206 09:08:49.874516     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:08:49 no-preload-769733 kubelet[720]: I1206 09:08:49.874546     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:08:49 no-preload-769733 kubelet[720]: E1206 09:08:49.874743     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gd7xs_kubernetes-dashboard(a46b17c9-9497-48ab-a831-f1ae7565b49a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" podUID="a46b17c9-9497-48ab-a831-f1ae7565b49a"
	Dec 06 09:08:50 no-preload-769733 kubelet[720]: E1206 09:08:50.876912     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:08:50 no-preload-769733 kubelet[720]: I1206 09:08:50.876959     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:08:50 no-preload-769733 kubelet[720]: E1206 09:08:50.877195     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gd7xs_kubernetes-dashboard(a46b17c9-9497-48ab-a831-f1ae7565b49a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" podUID="a46b17c9-9497-48ab-a831-f1ae7565b49a"
	Dec 06 09:08:51 no-preload-769733 kubelet[720]: E1206 09:08:51.882550     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8" containerName="kubernetes-dashboard"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: E1206 09:08:52.390477     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-769733" containerName="kube-controller-manager"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: I1206 09:08:52.403180     720 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8" podStartSLOduration=8.94027005 podStartE2EDuration="14.40313988s" podCreationTimestamp="2025-12-06 09:08:38 +0000 UTC" firstStartedPulling="2025-12-06 09:08:45.561951335 +0000 UTC m=+12.847704990" lastFinishedPulling="2025-12-06 09:08:51.024821151 +0000 UTC m=+18.310574820" observedRunningTime="2025-12-06 09:08:51.896208545 +0000 UTC m=+19.181962221" watchObservedRunningTime="2025-12-06 09:08:52.40313988 +0000 UTC m=+19.688893535"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: E1206 09:08:52.458835     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-769733" containerName="kube-apiserver"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: E1206 09:08:52.889026     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-769733" containerName="kube-apiserver"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: E1206 09:08:52.889338     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8" containerName="kubernetes-dashboard"
	Dec 06 09:08:55 no-preload-769733 kubelet[720]: E1206 09:08:55.532011     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:08:55 no-preload-769733 kubelet[720]: I1206 09:08:55.532057     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:08:55 no-preload-769733 kubelet[720]: E1206 09:08:55.532234     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gd7xs_kubernetes-dashboard(a46b17c9-9497-48ab-a831-f1ae7565b49a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" podUID="a46b17c9-9497-48ab-a831-f1ae7565b49a"
	Dec 06 09:09:00 no-preload-769733 kubelet[720]: E1206 09:09:00.800153     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:09:00 no-preload-769733 kubelet[720]: I1206 09:09:00.800201     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:09:01 no-preload-769733 kubelet[720]: I1206 09:09:01.916331     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:09:01 no-preload-769733 kubelet[720]: E1206 09:09:01.916614     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:09:01 no-preload-769733 kubelet[720]: I1206 09:09:01.916823     720 scope.go:122] "RemoveContainer" containerID="5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd"
	Dec 06 09:09:01 no-preload-769733 kubelet[720]: E1206 09:09:01.917056     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gd7xs_kubernetes-dashboard(a46b17c9-9497-48ab-a831-f1ae7565b49a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" podUID="a46b17c9-9497-48ab-a831-f1ae7565b49a"
	Dec 06 09:09:05 no-preload-769733 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:09:05 no-preload-769733 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:09:05 no-preload-769733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:05 no-preload-769733 systemd[1]: kubelet.service: Consumed 1.256s CPU time.
	
	
	==> kubernetes-dashboard [6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d] <==
	2025/12/06 09:08:51 Using namespace: kubernetes-dashboard
	2025/12/06 09:08:51 Using in-cluster config to connect to apiserver
	2025/12/06 09:08:51 Using secret token for csrf signing
	2025/12/06 09:08:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:08:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:08:51 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/06 09:08:51 Generating JWE encryption key
	2025/12/06 09:08:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:08:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:08:51 Initializing JWE encryption key from synchronized object
	2025/12/06 09:08:51 Creating in-cluster Sidecar client
	2025/12/06 09:08:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:08:51 Serving insecurely on HTTP port: 9090
	2025/12/06 09:08:51 Starting overwatch
	
	
	==> storage-provisioner [60efef5c2c46ca4f5242aa414ab8357c06d3bd4857bb484da8a7a38eef2ce888] <==
	I1206 09:08:36.163627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:09:06.166039       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-769733 -n no-preload-769733
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-769733 -n no-preload-769733: exit status 2 (389.262561ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-769733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-769733
helpers_test.go:243: (dbg) docker inspect no-preload-769733:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01",
	        "Created": "2025-12-06T09:07:11.630466318Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:08:26.794910158Z",
	            "FinishedAt": "2025-12-06T09:08:25.895659891Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/hosts",
	        "LogPath": "/var/lib/docker/containers/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01/2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01-json.log",
	        "Name": "/no-preload-769733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-769733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-769733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b0a9b7f20f182141ef6bb06a624ef9dccb8e4f1d81b7f78aeef182dd9eafb01",
	                "LowerDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b5c990647f6efbf89749b96de59b21683b085a04e5783741dbc72eb51a00f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-769733",
	                "Source": "/var/lib/docker/volumes/no-preload-769733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-769733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-769733",
	                "name.minikube.sigs.k8s.io": "no-preload-769733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "89e36bb5782d56a7227763d37b25445fea444c0cebfce8e76c37e438916eaf35",
	            "SandboxKey": "/var/run/docker/netns/89e36bb5782d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-769733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d145c702ed0f08c782b680a462b6b5a0d8a60b36a26fd7d3512cd90419c2ab9",
	                    "EndpointID": "a50f203da5d1d47288a30f672244bdd9aa55ab8fde1ad41310b86f6bf9aa6330",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ee:90:21:7d:0b:2c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-769733",
	                        "2b0a9b7f20f1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-769733 -n no-preload-769733
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-769733 -n no-preload-769733: exit status 2 (386.059367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-769733 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-769733 logs -n 25: (1.206417311s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ stop    │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p NoKubernetes-328079 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ -p NoKubernetes-328079 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ delete  │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-322324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-322324 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-769733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ stop    │ -p no-preload-769733 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-322324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-769733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ delete  │ -p stopped-upgrade-454433                                                                                                                                                                                                                     │ stopped-upgrade-454433       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ image   │ no-preload-769733 image list --format=json                                                                                                                                                                                                    │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p no-preload-769733 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-702638                                                                                                                                                                                                                  │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-217626                                                                                                                                                                                                               │ disable-driver-mounts-217626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ image   │ old-k8s-version-322324 image list --format=json                                                                                                                                                                                               │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p old-k8s-version-322324 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:09:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:09:07.781061  282948 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:07.781355  282948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:07.781366  282948 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:07.781372  282948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:07.781595  282948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:07.782109  282948 out.go:368] Setting JSON to false
	I1206 09:09:07.783505  282948 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3099,"bootTime":1765009049,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:09:07.783588  282948 start.go:143] virtualization: kvm guest
	I1206 09:09:07.785593  282948 out.go:179] * [default-k8s-diff-port-213278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:09:07.787310  282948 notify.go:221] Checking for updates...
	I1206 09:09:07.787346  282948 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:09:07.788797  282948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:09:07.789998  282948 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:07.791256  282948 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:09:07.792967  282948 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:09:07.795295  282948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:09:07.797267  282948 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:07.797404  282948 config.go:182] Loaded profile config "no-preload-769733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:07.797535  282948 config.go:182] Loaded profile config "old-k8s-version-322324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:09:07.797659  282948 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:09:07.824410  282948 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:09:07.824537  282948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:07.886126  282948 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:07.875601828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:07.886233  282948 docker.go:319] overlay module found
	I1206 09:09:07.887931  282948 out.go:179] * Using the docker driver based on user configuration
	I1206 09:09:07.889291  282948 start.go:309] selected driver: docker
	I1206 09:09:07.889310  282948 start.go:927] validating driver "docker" against <nil>
	I1206 09:09:07.889323  282948 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:09:07.889912  282948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:07.950516  282948 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:07.940060335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:07.950743  282948 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:09:07.951033  282948 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:09:07.957079  282948 out.go:179] * Using Docker driver with root privileges
	I1206 09:09:07.959247  282948 cni.go:84] Creating CNI manager for ""
	I1206 09:09:07.959341  282948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:07.959362  282948 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:09:07.959461  282948 start.go:353] cluster config:
	{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:07.961050  282948 out.go:179] * Starting "default-k8s-diff-port-213278" primary control-plane node in "default-k8s-diff-port-213278" cluster
	I1206 09:09:07.962278  282948 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:09:07.963476  282948 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:09:07.964504  282948 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:09:07.964544  282948 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:09:07.964566  282948 cache.go:65] Caching tarball of preloaded images
	I1206 09:09:07.964600  282948 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:09:07.964653  282948 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:09:07.964661  282948 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:09:07.964736  282948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json ...
	I1206 09:09:07.964750  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json: {Name:mk749be6f3b06ee84322203f3d8663effbbdb2b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:07.990589  282948 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:09:07.990618  282948 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:09:07.990636  282948 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:09:07.990679  282948 start.go:360] acquireMachinesLock for default-k8s-diff-port-213278: {Name:mk866228eff8eb9f8cbf106e77f0dc837aabddf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:09:07.990811  282948 start.go:364] duration metric: took 107.653µs to acquireMachinesLock for "default-k8s-diff-port-213278"
	I1206 09:09:07.990849  282948 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:07.990932  282948 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 06 09:08:48 no-preload-769733 crio[569]: time="2025-12-06T09:08:48.931779671Z" level=info msg="Started container" PID=1688 containerID=a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper id=770ccc20-f8e4-4923-8718-6d1a5e8680ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=3bd04476d0ce1662b2592f5e9b4f6ceb630441b9059be7cedd7a5ed025f35d83
	Dec 06 09:08:49 no-preload-769733 crio[569]: time="2025-12-06T09:08:49.876028562Z" level=info msg="Removing container: 30c590b10c45fe4553ebdb8be8e251a94d23ec7adbd08dd6b08fa8782ae6546b" id=6a08fcab-be03-4a11-ac9f-65f5d611a3ca name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:08:50 no-preload-769733 crio[569]: time="2025-12-06T09:08:50.984738021Z" level=info msg="Removed container 30c590b10c45fe4553ebdb8be8e251a94d23ec7adbd08dd6b08fa8782ae6546b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper" id=6a08fcab-be03-4a11-ac9f-65f5d611a3ca name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.022830859Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=e94639e8-957a-483f-81da-e7e9c342853c name=/runtime.v1.ImageService/PullImage
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.023546902Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=fb2d4544-7517-4def-b938-cfd5a78423de name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.025779037Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3947765e-2ee4-48d5-9ca0-60cebbec47ad name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.030611967Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8/kubernetes-dashboard" id=4172e999-542c-4010-86fe-002c9ab1b034 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.030739636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.0360444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.036345369Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5937f64a4fbec1c6d7a8e9e4191b5465a637f74544eb6fbc136edec54ec40f36/merged/etc/group: no such file or directory"
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.036809067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.067348689Z" level=info msg="Created container 6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d: kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8/kubernetes-dashboard" id=4172e999-542c-4010-86fe-002c9ab1b034 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.068072341Z" level=info msg="Starting container: 6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d" id=07b3b6c4-7aef-4d92-8e09-9ffb4a4f465d name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:08:51 no-preload-769733 crio[569]: time="2025-12-06T09:08:51.070558643Z" level=info msg="Started container" PID=1731 containerID=6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d description=kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8/kubernetes-dashboard id=07b3b6c4-7aef-4d92-8e09-9ffb4a4f465d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a3d6f85ae4654487052656a6910622821a38cc67f5c51ee27eb7bd607c7ed52
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.800687966Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b29f44c1-1e9e-4240-81e7-f80bb65a4567 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.806744814Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=635a9872-28c3-4371-bcce-bc106feda30c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.809929985Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper" id=1e4b99a6-2246-46fe-adce-2761bf86fba8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.810088622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.818743602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.819295946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.961712138Z" level=info msg="Created container 5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper" id=1e4b99a6-2246-46fe-adce-2761bf86fba8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.963301878Z" level=info msg="Starting container: 5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd" id=28777edd-bf14-45de-bf1d-bbfd94ebb442 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:00 no-preload-769733 crio[569]: time="2025-12-06T09:09:00.965795893Z" level=info msg="Started container" PID=1767 containerID=5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper id=28777edd-bf14-45de-bf1d-bbfd94ebb442 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3bd04476d0ce1662b2592f5e9b4f6ceb630441b9059be7cedd7a5ed025f35d83
	Dec 06 09:09:01 no-preload-769733 crio[569]: time="2025-12-06T09:09:01.918562549Z" level=info msg="Removing container: a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d" id=ed3e4e50-965a-4752-b441-399b326872f6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:09:01 no-preload-769733 crio[569]: time="2025-12-06T09:09:01.929833985Z" level=info msg="Removed container a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs/dashboard-metrics-scraper" id=ed3e4e50-965a-4752-b441-399b326872f6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5144e6e15dbfe       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   2                   3bd04476d0ce1       dashboard-metrics-scraper-867fb5f87b-gd7xs   kubernetes-dashboard
	6a489dd16d056       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   18 seconds ago      Running             kubernetes-dashboard        0                   6a3d6f85ae465       kubernetes-dashboard-b84665fb8-nz2h8         kubernetes-dashboard
	3c32925f8ca5e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           30 seconds ago      Running             coredns                     0                   399fd681077f7       coredns-7d764666f9-jllj2                     kube-system
	acb0b0f53951a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           30 seconds ago      Running             busybox                     1                   791c2baf7a465       busybox                                      default
	13a307ecfdb80       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           33 seconds ago      Running             kube-proxy                  0                   1406844a6aee4       kube-proxy-5jsq2                             kube-system
	60efef5c2c46c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           33 seconds ago      Exited              storage-provisioner         0                   f21043e641148       storage-provisioner                          kube-system
	ec86b1cf9e163       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           33 seconds ago      Running             kindnet-cni                 0                   53b627ee1a5da       kindnet-7m8h6                                kube-system
	d0afbad498b73       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           36 seconds ago      Running             etcd                        0                   6417f5c956ee4       etcd-no-preload-769733                       kube-system
	64e6d8d09b82d       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           36 seconds ago      Running             kube-apiserver              0                   7a2d7b028bb2e       kube-apiserver-no-preload-769733             kube-system
	ddbc73ad7fa5d       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           36 seconds ago      Running             kube-controller-manager     0                   748e3118e9f91       kube-controller-manager-no-preload-769733    kube-system
	2822e91cc3bdb       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           36 seconds ago      Running             kube-scheduler              0                   3b3fe1533d66f       kube-scheduler-no-preload-769733             kube-system
	
	
	==> coredns [3c32925f8ca5ebb1fe444e961d3933b6ab892cbdaa78228462c024661935c5e9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49913 - 34303 "HINFO IN 6491382940492496492.2672897898509078328. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.419604277s
	
	
	==> describe nodes <==
	Name:               no-preload-769733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-769733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=no-preload-769733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_07_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:07:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-769733
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:08:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:08:45 +0000   Sat, 06 Dec 2025 09:07:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:08:45 +0000   Sat, 06 Dec 2025 09:07:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:08:45 +0000   Sat, 06 Dec 2025 09:07:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:08:45 +0000   Sat, 06 Dec 2025 09:08:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-769733
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                102fff6b-fbfc-491f-a5fe-409060b67cce
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 coredns-7d764666f9-jllj2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 etcd-no-preload-769733                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         94s
	  kube-system                 kindnet-7m8h6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-no-preload-769733              250m (3%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-no-preload-769733     200m (2%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-5jsq2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-no-preload-769733              100m (1%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-gd7xs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-nz2h8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  90s   node-controller  Node no-preload-769733 event: Registered Node no-preload-769733 in Controller
	  Normal  RegisteredNode  32s   node-controller  Node no-preload-769733 event: Registered Node no-preload-769733 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [d0afbad498b737f9a326bbd880b916eec7f277a7a2d1bbc94bf74de4be59afa5] <==
	{"level":"warn","ts":"2025-12-06T09:08:34.185560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.192733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.203598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.210761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.217765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.224593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.231979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.238145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.245807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.252310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.259142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.265782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.272240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.278853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.292938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.301611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.310353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.318938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.326471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.348219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.351909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.358682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.365340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.371827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:08:34.428507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57228","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:10 up 51 min,  0 user,  load average: 2.92, 2.27, 1.70
	Linux no-preload-769733 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec86b1cf9e1633d8e898b3b09622c6c417c612e379a6bd67b0c3a5ffbb3629f8] <==
	I1206 09:08:36.395543       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:08:36.395825       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1206 09:08:36.395948       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:08:36.395963       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:08:36.396015       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:08:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:08:36.690897       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:08:36.692141       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:08:36.692212       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:08:36.692378       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:08:36.892796       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:08:36.892821       1 metrics.go:72] Registering metrics
	I1206 09:08:36.892866       1 controller.go:711] "Syncing nftables rules"
	I1206 09:08:46.596069       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:08:46.596125       1 main.go:301] handling current node
	I1206 09:08:56.598486       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:08:56.598526       1 main.go:301] handling current node
	I1206 09:09:06.599291       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1206 09:09:06.599323       1 main.go:301] handling current node
	
	
	==> kube-apiserver [64e6d8d09b82ddda9e75d07401cf42a63194cca950c2d205b309768e43ff4df9] <==
	I1206 09:08:35.057720       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:08:35.057743       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:08:35.059553       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 09:08:35.059641       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:35.059691       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:08:35.059965       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:08:35.059999       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:08:35.060185       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1206 09:08:35.067964       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:08:35.097818       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:08:35.106156       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:35.106818       1 policy_source.go:248] refreshing policies
	I1206 09:08:35.190837       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:08:35.431062       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:08:35.461608       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:08:35.482540       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:08:35.492437       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:08:35.499862       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:08:35.536314       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.172.219"}
	I1206 09:08:35.547038       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.6.134"}
	I1206 09:08:35.943854       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:08:38.710567       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:08:38.761314       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:08:38.864356       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:08:39.011527       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ddbc73ad7fa5d9113b7d32a21294ac2f252d10dea2e4d287294a4318592979c5] <==
	I1206 09:08:38.215214       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215214       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215229       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215231       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215239       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.217791       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:08:38.217871       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-769733"
	I1206 09:08:38.217913       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1206 09:08:38.215254       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.218019       1 range_allocator.go:177] "Sending events to api server"
	I1206 09:08:38.218069       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1206 09:08:38.218076       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:08:38.218083       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215246       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215240       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215249       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.215257       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.221963       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:08:38.238660       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.315577       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.315598       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:08:38.315604       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:08:38.323171       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:38.868176       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1206 09:08:48.220275       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [13a307ecfdb80882ff1f7a7ff010df7e0ecb383b512e2d747867e48a8c753e32] <==
	I1206 09:08:36.190399       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:08:36.272295       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:08:36.372846       1 shared_informer.go:377] "Caches are synced"
	I1206 09:08:36.372875       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1206 09:08:36.372937       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:08:36.394292       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:08:36.394343       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:08:36.400984       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:08:36.401425       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:08:36.401445       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:08:36.403125       1 config.go:200] "Starting service config controller"
	I1206 09:08:36.403163       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:08:36.403205       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:08:36.403213       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:08:36.403164       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:08:36.403329       1 config.go:309] "Starting node config controller"
	I1206 09:08:36.403340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:08:36.403347       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:08:36.403332       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:08:36.503427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:08:36.503446       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:08:36.503465       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2822e91cc3bdbcb4954732f408d6dd842598613c1df6081098b5a50c9e302aa6] <==
	I1206 09:08:33.690933       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:08:34.970776       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:08:34.970828       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:08:34.970840       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:08:34.970849       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:08:35.030265       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:08:35.030359       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:08:35.037464       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:08:35.037571       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:08:35.037949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:08:35.039739       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:08:35.138458       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:08:49 no-preload-769733 kubelet[720]: E1206 09:08:49.874516     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:08:49 no-preload-769733 kubelet[720]: I1206 09:08:49.874546     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:08:49 no-preload-769733 kubelet[720]: E1206 09:08:49.874743     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gd7xs_kubernetes-dashboard(a46b17c9-9497-48ab-a831-f1ae7565b49a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" podUID="a46b17c9-9497-48ab-a831-f1ae7565b49a"
	Dec 06 09:08:50 no-preload-769733 kubelet[720]: E1206 09:08:50.876912     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:08:50 no-preload-769733 kubelet[720]: I1206 09:08:50.876959     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:08:50 no-preload-769733 kubelet[720]: E1206 09:08:50.877195     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gd7xs_kubernetes-dashboard(a46b17c9-9497-48ab-a831-f1ae7565b49a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" podUID="a46b17c9-9497-48ab-a831-f1ae7565b49a"
	Dec 06 09:08:51 no-preload-769733 kubelet[720]: E1206 09:08:51.882550     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8" containerName="kubernetes-dashboard"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: E1206 09:08:52.390477     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-769733" containerName="kube-controller-manager"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: I1206 09:08:52.403180     720 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8" podStartSLOduration=8.94027005 podStartE2EDuration="14.40313988s" podCreationTimestamp="2025-12-06 09:08:38 +0000 UTC" firstStartedPulling="2025-12-06 09:08:45.561951335 +0000 UTC m=+12.847704990" lastFinishedPulling="2025-12-06 09:08:51.024821151 +0000 UTC m=+18.310574820" observedRunningTime="2025-12-06 09:08:51.896208545 +0000 UTC m=+19.181962221" watchObservedRunningTime="2025-12-06 09:08:52.40313988 +0000 UTC m=+19.688893535"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: E1206 09:08:52.458835     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-769733" containerName="kube-apiserver"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: E1206 09:08:52.889026     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-769733" containerName="kube-apiserver"
	Dec 06 09:08:52 no-preload-769733 kubelet[720]: E1206 09:08:52.889338     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nz2h8" containerName="kubernetes-dashboard"
	Dec 06 09:08:55 no-preload-769733 kubelet[720]: E1206 09:08:55.532011     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:08:55 no-preload-769733 kubelet[720]: I1206 09:08:55.532057     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:08:55 no-preload-769733 kubelet[720]: E1206 09:08:55.532234     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gd7xs_kubernetes-dashboard(a46b17c9-9497-48ab-a831-f1ae7565b49a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" podUID="a46b17c9-9497-48ab-a831-f1ae7565b49a"
	Dec 06 09:09:00 no-preload-769733 kubelet[720]: E1206 09:09:00.800153     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:09:00 no-preload-769733 kubelet[720]: I1206 09:09:00.800201     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:09:01 no-preload-769733 kubelet[720]: I1206 09:09:01.916331     720 scope.go:122] "RemoveContainer" containerID="a2e5e29ed786345e65fec76f021174a112574a917ca53233fe1b88a803f5594d"
	Dec 06 09:09:01 no-preload-769733 kubelet[720]: E1206 09:09:01.916614     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" containerName="dashboard-metrics-scraper"
	Dec 06 09:09:01 no-preload-769733 kubelet[720]: I1206 09:09:01.916823     720 scope.go:122] "RemoveContainer" containerID="5144e6e15dbfec0ea5d5d13f821575c9ee28938f5a43760e854f44eaf95a7afd"
	Dec 06 09:09:01 no-preload-769733 kubelet[720]: E1206 09:09:01.917056     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gd7xs_kubernetes-dashboard(a46b17c9-9497-48ab-a831-f1ae7565b49a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gd7xs" podUID="a46b17c9-9497-48ab-a831-f1ae7565b49a"
	Dec 06 09:09:05 no-preload-769733 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:09:05 no-preload-769733 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:09:05 no-preload-769733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:05 no-preload-769733 systemd[1]: kubelet.service: Consumed 1.256s CPU time.
	
	
	==> kubernetes-dashboard [6a489dd16d0567a99b3d67df3f517224e0a1c0e689a5b14328d20c2e3a113f0d] <==
	2025/12/06 09:08:51 Using namespace: kubernetes-dashboard
	2025/12/06 09:08:51 Using in-cluster config to connect to apiserver
	2025/12/06 09:08:51 Using secret token for csrf signing
	2025/12/06 09:08:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:08:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:08:51 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/06 09:08:51 Generating JWE encryption key
	2025/12/06 09:08:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:08:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:08:51 Initializing JWE encryption key from synchronized object
	2025/12/06 09:08:51 Creating in-cluster Sidecar client
	2025/12/06 09:08:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:08:51 Serving insecurely on HTTP port: 9090
	2025/12/06 09:08:51 Starting overwatch
	
	
	==> storage-provisioner [60efef5c2c46ca4f5242aa414ab8357c06d3bd4857bb484da8a7a38eef2ce888] <==
	I1206 09:08:36.163627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:09:06.166039       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-769733 -n no-preload-769733
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-769733 -n no-preload-769733: exit status 2 (357.939119ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-769733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.07s)
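For local triage of this Pause failure, a minimal sketch (assuming the no-preload-769733 profile still exists and the binary sits at out/minikube-linux-amd64, as in the run above) is to re-issue the pause with verbose logging and then ask CRI-O inside the node which kube-system containers it still reports, mirroring the crictl query visible in the log:

	# retry the pause exactly as the test does
	out/minikube-linux-amd64 pause -p no-preload-769733 --alsologtostderr -v=1
	# inside the node, list what CRI-O still considers running in kube-system
	out/minikube-linux-amd64 ssh -p no-preload-769733 -- sudo crictl ps --label io.kubernetes.pod.namespace=kube-system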

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (7.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-322324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-322324 --alsologtostderr -v=1: exit status 80 (2.042519092s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-322324 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:09:08.951263  283725 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:08.951375  283725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:08.951394  283725 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:08.951400  283725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:08.951646  283725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:08.951918  283725 out.go:368] Setting JSON to false
	I1206 09:09:08.951939  283725 mustload.go:66] Loading cluster: old-k8s-version-322324
	I1206 09:09:08.952332  283725 config.go:182] Loaded profile config "old-k8s-version-322324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:09:08.952743  283725 cli_runner.go:164] Run: docker container inspect old-k8s-version-322324 --format={{.State.Status}}
	I1206 09:09:08.974361  283725 host.go:66] Checking if "old-k8s-version-322324" exists ...
	I1206 09:09:08.974667  283725 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:09.048101  283725 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-06 09:09:09.036815773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:09.049088  283725 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-322324 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:09:09.051302  283725 out.go:179] * Pausing node old-k8s-version-322324 ... 
	I1206 09:09:09.052549  283725 host.go:66] Checking if "old-k8s-version-322324" exists ...
	I1206 09:09:09.052877  283725 ssh_runner.go:195] Run: systemctl --version
	I1206 09:09:09.052927  283725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-322324
	I1206 09:09:09.075569  283725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/old-k8s-version-322324/id_rsa Username:docker}
	I1206 09:09:09.169934  283725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:09.185614  283725 pause.go:52] kubelet running: true
	I1206 09:09:09.185717  283725 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:09.397070  283725 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:09.397156  283725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:09.493374  283725 cri.go:89] found id: "bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7"
	I1206 09:09:09.493398  283725 cri.go:89] found id: "e9e02ff4d6e79c95e30cc7a811aafbfa34d658f1561ea8f3b6c2269ad2e1397b"
	I1206 09:09:09.493404  283725 cri.go:89] found id: "cb6d5b477d48c1e0e8c2ebe41e2734e7a810d702cdea465f3a9dc52ad2db62bc"
	I1206 09:09:09.493409  283725 cri.go:89] found id: "f19651bfbff8fcbd9e841cc22d3284b680b67b091a55ceb311089f61ad655413"
	I1206 09:09:09.493414  283725 cri.go:89] found id: "a79a78b8625bbbf4f8feeaa1cead5d551510889d8cbc1377efeb7dbc4f117d3f"
	I1206 09:09:09.493418  283725 cri.go:89] found id: "3d02391cf9525e0f4953634ac19b398b1652701a94dddd15a44811b30512ea3a"
	I1206 09:09:09.493423  283725 cri.go:89] found id: "449ca9b6a1f1ebf09546774acd9a2b46ec5e9f317ca3c7946c4a21ff592aa0a1"
	I1206 09:09:09.493427  283725 cri.go:89] found id: "cd24c2425e3bca0058ef42378cc74157a256df97dddd9fdcb2d5c40e9fe7acd1"
	I1206 09:09:09.493431  283725 cri.go:89] found id: "2b2dc64bd75ea60632c5aec9a4334196e82d82d7f050472813594206b55966f8"
	I1206 09:09:09.493439  283725 cri.go:89] found id: "2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559"
	I1206 09:09:09.493443  283725 cri.go:89] found id: "e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344"
	I1206 09:09:09.493447  283725 cri.go:89] found id: ""
	I1206 09:09:09.493497  283725 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:09.507725  283725 retry.go:31] will retry after 343.322089ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:09Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:09.852168  283725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:09.867830  283725 pause.go:52] kubelet running: false
	I1206 09:09:09.867904  283725 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:10.061821  283725 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:10.061931  283725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:10.139615  283725 cri.go:89] found id: "bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7"
	I1206 09:09:10.139641  283725 cri.go:89] found id: "e9e02ff4d6e79c95e30cc7a811aafbfa34d658f1561ea8f3b6c2269ad2e1397b"
	I1206 09:09:10.139648  283725 cri.go:89] found id: "cb6d5b477d48c1e0e8c2ebe41e2734e7a810d702cdea465f3a9dc52ad2db62bc"
	I1206 09:09:10.139653  283725 cri.go:89] found id: "f19651bfbff8fcbd9e841cc22d3284b680b67b091a55ceb311089f61ad655413"
	I1206 09:09:10.139658  283725 cri.go:89] found id: "a79a78b8625bbbf4f8feeaa1cead5d551510889d8cbc1377efeb7dbc4f117d3f"
	I1206 09:09:10.139663  283725 cri.go:89] found id: "3d02391cf9525e0f4953634ac19b398b1652701a94dddd15a44811b30512ea3a"
	I1206 09:09:10.139668  283725 cri.go:89] found id: "449ca9b6a1f1ebf09546774acd9a2b46ec5e9f317ca3c7946c4a21ff592aa0a1"
	I1206 09:09:10.139673  283725 cri.go:89] found id: "cd24c2425e3bca0058ef42378cc74157a256df97dddd9fdcb2d5c40e9fe7acd1"
	I1206 09:09:10.139678  283725 cri.go:89] found id: "2b2dc64bd75ea60632c5aec9a4334196e82d82d7f050472813594206b55966f8"
	I1206 09:09:10.139697  283725 cri.go:89] found id: "2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559"
	I1206 09:09:10.139705  283725 cri.go:89] found id: "e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344"
	I1206 09:09:10.139709  283725 cri.go:89] found id: ""
	I1206 09:09:10.139756  283725 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:10.153178  283725 retry.go:31] will retry after 458.924225ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:10Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:10.612324  283725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:10.627952  283725 pause.go:52] kubelet running: false
	I1206 09:09:10.628050  283725 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:10.800983  283725 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:10.801100  283725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:10.880034  283725 cri.go:89] found id: "bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7"
	I1206 09:09:10.880058  283725 cri.go:89] found id: "e9e02ff4d6e79c95e30cc7a811aafbfa34d658f1561ea8f3b6c2269ad2e1397b"
	I1206 09:09:10.880072  283725 cri.go:89] found id: "cb6d5b477d48c1e0e8c2ebe41e2734e7a810d702cdea465f3a9dc52ad2db62bc"
	I1206 09:09:10.880078  283725 cri.go:89] found id: "f19651bfbff8fcbd9e841cc22d3284b680b67b091a55ceb311089f61ad655413"
	I1206 09:09:10.880096  283725 cri.go:89] found id: "a79a78b8625bbbf4f8feeaa1cead5d551510889d8cbc1377efeb7dbc4f117d3f"
	I1206 09:09:10.880103  283725 cri.go:89] found id: "3d02391cf9525e0f4953634ac19b398b1652701a94dddd15a44811b30512ea3a"
	I1206 09:09:10.880107  283725 cri.go:89] found id: "449ca9b6a1f1ebf09546774acd9a2b46ec5e9f317ca3c7946c4a21ff592aa0a1"
	I1206 09:09:10.880111  283725 cri.go:89] found id: "cd24c2425e3bca0058ef42378cc74157a256df97dddd9fdcb2d5c40e9fe7acd1"
	I1206 09:09:10.880117  283725 cri.go:89] found id: "2b2dc64bd75ea60632c5aec9a4334196e82d82d7f050472813594206b55966f8"
	I1206 09:09:10.880132  283725 cri.go:89] found id: "2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559"
	I1206 09:09:10.880141  283725 cri.go:89] found id: "e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344"
	I1206 09:09:10.880145  283725 cri.go:89] found id: ""
	I1206 09:09:10.880194  283725 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:10.902018  283725 out.go:203] 
	W1206 09:09:10.904634  283725 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:09:10.904669  283725 out.go:285] * 
	* 
	W1206 09:09:10.908803  283725 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:09:10.910242  283725 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-322324 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-322324
helpers_test.go:243: (dbg) docker inspect old-k8s-version-322324:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f",
	        "Created": "2025-12-06T09:07:01.784357575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268356,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:08:12.097038519Z",
	            "FinishedAt": "2025-12-06T09:08:11.19409041Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/hosts",
	        "LogPath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f-json.log",
	        "Name": "/old-k8s-version-322324",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-322324:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-322324",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f",
	                "LowerDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-322324",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-322324/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-322324",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-322324",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-322324",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5360085a5dff562540686582b084b9206fb75586d4eef45fb0c2e17730edf02f",
	            "SandboxKey": "/var/run/docker/netns/5360085a5dff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-322324": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6aeaf0351aa11b8b99e15f127f61cc1457ec80dfb36963930d49a8cf393d88b",
	                    "EndpointID": "24c67bc4bd66e8b1b09095bfa06c5fa81088e5f58de1c42fe58428ba1f7c4820",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "fa:29:67:b8:e2:42",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-322324",
	                        "7e0820bc743c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-322324 -n old-k8s-version-322324
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-322324 -n old-k8s-version-322324: exit status 2 (344.631514ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-322324 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-322324 logs -n 25: (2.038912527s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p NoKubernetes-328079 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ -p NoKubernetes-328079 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ delete  │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-322324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-322324 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-769733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ stop    │ -p no-preload-769733 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-322324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-769733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ delete  │ -p stopped-upgrade-454433                                                                                                                                                                                                                     │ stopped-upgrade-454433       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ image   │ no-preload-769733 image list --format=json                                                                                                                                                                                                    │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p no-preload-769733 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-702638                                                                                                                                                                                                                  │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-217626                                                                                                                                                                                                               │ disable-driver-mounts-217626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ image   │ old-k8s-version-322324 image list --format=json                                                                                                                                                                                               │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p old-k8s-version-322324 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                          │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:09:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:09:07.781061  282948 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:07.781355  282948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:07.781366  282948 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:07.781372  282948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:07.781595  282948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:07.782109  282948 out.go:368] Setting JSON to false
	I1206 09:09:07.783505  282948 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3099,"bootTime":1765009049,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:09:07.783588  282948 start.go:143] virtualization: kvm guest
	I1206 09:09:07.785593  282948 out.go:179] * [default-k8s-diff-port-213278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:09:07.787310  282948 notify.go:221] Checking for updates...
	I1206 09:09:07.787346  282948 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:09:07.788797  282948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:09:07.789998  282948 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:07.791256  282948 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:09:07.792967  282948 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:09:07.795295  282948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:09:07.797267  282948 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:07.797404  282948 config.go:182] Loaded profile config "no-preload-769733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:07.797535  282948 config.go:182] Loaded profile config "old-k8s-version-322324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:09:07.797659  282948 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:09:07.824410  282948 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:09:07.824537  282948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:07.886126  282948 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:07.875601828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:07.886233  282948 docker.go:319] overlay module found
	I1206 09:09:07.887931  282948 out.go:179] * Using the docker driver based on user configuration
	I1206 09:09:07.889291  282948 start.go:309] selected driver: docker
	I1206 09:09:07.889310  282948 start.go:927] validating driver "docker" against <nil>
	I1206 09:09:07.889323  282948 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:09:07.889912  282948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:07.950516  282948 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:07.940060335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:07.950743  282948 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:09:07.951033  282948 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:09:07.957079  282948 out.go:179] * Using Docker driver with root privileges
	I1206 09:09:07.959247  282948 cni.go:84] Creating CNI manager for ""
	I1206 09:09:07.959341  282948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:07.959362  282948 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:09:07.959461  282948 start.go:353] cluster config:
	{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:07.961050  282948 out.go:179] * Starting "default-k8s-diff-port-213278" primary control-plane node in "default-k8s-diff-port-213278" cluster
	I1206 09:09:07.962278  282948 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:09:07.963476  282948 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:09:07.964504  282948 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:09:07.964544  282948 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:09:07.964566  282948 cache.go:65] Caching tarball of preloaded images
	I1206 09:09:07.964600  282948 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:09:07.964653  282948 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:09:07.964661  282948 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:09:07.964736  282948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json ...
	I1206 09:09:07.964750  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json: {Name:mk749be6f3b06ee84322203f3d8663effbbdb2b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:07.990589  282948 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:09:07.990618  282948 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:09:07.990636  282948 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:09:07.990679  282948 start.go:360] acquireMachinesLock for default-k8s-diff-port-213278: {Name:mk866228eff8eb9f8cbf106e77f0dc837aabddf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:09:07.990811  282948 start.go:364] duration metric: took 107.653µs to acquireMachinesLock for "default-k8s-diff-port-213278"
	I1206 09:09:07.990849  282948 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:07.990932  282948 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:09:06.052094  278230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:09:06.060053  278230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:09:06.060121  278230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:09:06.067147  278230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:09:06.074415  278230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:09:06.074468  278230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:09:06.081749  278230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:09:06.089051  278230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:09:06.089154  278230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:09:06.096652  278230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:09:06.155590  278230 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:09:06.217000  278230 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Dec 06 09:08:39 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:39.799832645Z" level=info msg="Created container e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl/kubernetes-dashboard" id=3f848543-f008-4246-9cd6-d4ad25f66cf9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:39 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:39.800757775Z" level=info msg="Starting container: e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344" id=f3db9055-032a-46dc-a87c-5da7da367b78 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:08:39 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:39.803104336Z" level=info msg="Started container" PID=1726 containerID=e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl/kubernetes-dashboard id=f3db9055-032a-46dc-a87c-5da7da367b78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a8a0b0e17aa465206af94c70ecdb8b3ae4397a231276867568417257bab40ff
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.302496327Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=03bb140e-ad2e-4d0a-a1d4-abd23ea6ad0a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.30487068Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3e031413-6f11-4664-b236-676e916b59be name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.306139032Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=715d8efd-f5a7-42f2-b1e9-a4387df5b29e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.306285087Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.312362574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.312555867Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fd01d2cf56c6e7c671a4ab9dabe05990626bfb07967d47356695400d300179fa/merged/etc/passwd: no such file or directory"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.312681473Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fd01d2cf56c6e7c671a4ab9dabe05990626bfb07967d47356695400d300179fa/merged/etc/group: no such file or directory"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.313025392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.339094085Z" level=info msg="Created container bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7: kube-system/storage-provisioner/storage-provisioner" id=715d8efd-f5a7-42f2-b1e9-a4387df5b29e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.339698417Z" level=info msg="Starting container: bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7" id=fcd6ebfe-d446-4301-8c40-0f269a83aa70 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.341428294Z" level=info msg="Started container" PID=1751 containerID=bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7 description=kube-system/storage-provisioner/storage-provisioner id=fcd6ebfe-d446-4301-8c40-0f269a83aa70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2c0312d8340520c023d58145396191eace3f38806cc206dfe631bda3d557e51
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.191633563Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b5287989-3df1-455f-9ab4-49c96f3d2b5a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.221904315Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=93db0756-344c-423f-842b-1fe854bb844b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.223276443Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q/dashboard-metrics-scraper" id=5728b536-747e-4a2c-8990-6c469d4ca61a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.223429687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.281302584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.282275223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.316080907Z" level=info msg="Created container 2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q/dashboard-metrics-scraper" id=5728b536-747e-4a2c-8990-6c469d4ca61a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.316719244Z" level=info msg="Starting container: 2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559" id=26dfaee0-7102-43f7-9312-d87d54800e54 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.318926099Z" level=info msg="Started container" PID=1769 containerID=2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q/dashboard-metrics-scraper id=26dfaee0-7102-43f7-9312-d87d54800e54 name=/runtime.v1.RuntimeService/StartContainer sandboxID=993e6c747bffaff30d6132aed133a405734f3658de07050934f15340bc6706c6
	Dec 06 09:08:56 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:56.31765941Z" level=info msg="Removing container: ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8" id=83e0b6bf-868d-4782-bf47-5d7e5ecbd099 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:08:56 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:56.330189193Z" level=info msg="Removed container ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q/dashboard-metrics-scraper" id=83e0b6bf-868d-4782-bf47-5d7e5ecbd099 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2b35be0a159e1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   993e6c747bffa       dashboard-metrics-scraper-5f989dc9cf-phv8q       kubernetes-dashboard
	bdd5bafc12871       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   d2c0312d83405       storage-provisioner                              kube-system
	e4c594281c21d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   0a8a0b0e17aa4       kubernetes-dashboard-8694d4445c-62nsl            kubernetes-dashboard
	e9e02ff4d6e79       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   d381f8b624484       coredns-5dd5756b68-gf4kq                         kube-system
	e001b5d895ee9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   e1813f4264bdb       busybox                                          default
	cb6d5b477d48c       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   5004278dedc67       kube-proxy-pspsz                                 kube-system
	f19651bfbff8f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   4827ca9c95c77       kindnet-fn4nn                                    kube-system
	a79a78b8625bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   d2c0312d83405       storage-provisioner                              kube-system
	3d02391cf9525       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   6e2843d7fb7ce       etcd-old-k8s-version-322324                      kube-system
	449ca9b6a1f1e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   b2c8a0bf7c51d       kube-controller-manager-old-k8s-version-322324   kube-system
	cd24c2425e3bc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   cf2107a78ec91       kube-apiserver-old-k8s-version-322324            kube-system
	2b2dc64bd75ea       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   c28f9cc8f9745       kube-scheduler-old-k8s-version-322324            kube-system
	
	
	==> coredns [e9e02ff4d6e79c95e30cc7a811aafbfa34d658f1561ea8f3b6c2269ad2e1397b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40225 - 7770 "HINFO IN 2873878907286317631.7812860973207805637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018686056s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-322324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-322324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=old-k8s-version-322324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_07_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-322324
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:09:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:08:51 +0000   Sat, 06 Dec 2025 09:07:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:08:51 +0000   Sat, 06 Dec 2025 09:07:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:08:51 +0000   Sat, 06 Dec 2025 09:07:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:08:51 +0000   Sat, 06 Dec 2025 09:07:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-322324
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                a183c0fa-92d5-4537-8c49-640a14d95f5a
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-gf4kq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-old-k8s-version-322324                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-fn4nn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-322324             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-322324    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-pspsz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-322324             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-phv8q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-62nsl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-322324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node old-k8s-version-322324 event: Registered Node old-k8s-version-322324 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-322324 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-322324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-322324 event: Registered Node old-k8s-version-322324 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [3d02391cf9525e0f4953634ac19b398b1652701a94dddd15a44811b30512ea3a] <==
	{"level":"info","ts":"2025-12-06T09:08:18.752891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-06T09:08:18.752979Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-06T09:08:18.753131Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:08:18.753205Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:08:18.755305Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-06T09:08:18.755455Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-06T09:08:18.755514Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-06T09:08:18.75559Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-06T09:08:18.755618Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-06T09:08:20.045322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-06T09:08:20.045371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-06T09:08:20.045394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-06T09:08:20.045409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-06T09:08:20.045415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-06T09:08:20.045422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-06T09:08:20.045429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-06T09:08:20.046536Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-322324 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-06T09:08:20.046549Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:08:20.046578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:08:20.046771Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-06T09:08:20.046801Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-06T09:08:20.047727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-06T09:08:20.04773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-06T09:08:39.387083Z","caller":"traceutil/trace.go:171","msg":"trace[1814917904] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"109.905239ms","start":"2025-12-06T09:08:39.277143Z","end":"2025-12-06T09:08:39.387048Z","steps":["trace[1814917904] 'process raft request'  (duration: 87.39612ms)","trace[1814917904] 'compare'  (duration: 22.356695ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:09:12.380408Z","caller":"traceutil/trace.go:171","msg":"trace[1974420864] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"133.96369ms","start":"2025-12-06T09:09:12.246409Z","end":"2025-12-06T09:09:12.380373Z","steps":["trace[1974420864] 'process raft request'  (duration: 126.086152ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:09:12 up 51 min,  0 user,  load average: 2.93, 2.29, 1.71
	Linux old-k8s-version-322324 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f19651bfbff8fcbd9e841cc22d3284b680b67b091a55ceb311089f61ad655413] <==
	I1206 09:08:21.816667       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:08:21.816890       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 09:08:21.817116       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:08:21.817151       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:08:21.817180       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:08:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:08:22.113115       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:08:22.113246       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:08:22.113339       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:08:22.113845       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:08:22.413594       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:08:22.413623       1 metrics.go:72] Registering metrics
	I1206 09:08:22.413677       1 controller.go:711] "Syncing nftables rules"
	I1206 09:08:32.018476       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:08:32.018519       1 main.go:301] handling current node
	I1206 09:08:42.018459       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:08:42.018486       1 main.go:301] handling current node
	I1206 09:08:52.017815       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:08:52.017846       1 main.go:301] handling current node
	I1206 09:09:02.017508       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:09:02.017547       1 main.go:301] handling current node
	I1206 09:09:12.024078       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:09:12.024120       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cd24c2425e3bca0058ef42378cc74157a256df97dddd9fdcb2d5c40e9fe7acd1] <==
	I1206 09:08:21.049827       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:08:21.101804       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:08:21.102388       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1206 09:08:21.102611       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1206 09:08:21.102647       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1206 09:08:21.102649       1 shared_informer.go:318] Caches are synced for configmaps
	I1206 09:08:21.102657       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1206 09:08:21.102668       1 aggregator.go:166] initial CRD sync complete...
	I1206 09:08:21.102681       1 autoregister_controller.go:141] Starting autoregister controller
	I1206 09:08:21.102688       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:08:21.102696       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:08:21.103063       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1206 09:08:21.107696       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:08:21.144597       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1206 09:08:21.987342       1 controller.go:624] quota admission added evaluator for: namespaces
	I1206 09:08:22.005470       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:08:22.020235       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1206 09:08:22.040206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:08:22.048737       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:08:22.056433       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1206 09:08:22.093047       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.79.218"}
	I1206 09:08:22.104956       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.37.47"}
	I1206 09:08:33.940973       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 09:08:33.955847       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1206 09:08:33.975152       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [449ca9b6a1f1ebf09546774acd9a2b46ec5e9f317ca3c7946c4a21ff592aa0a1] <==
	I1206 09:08:33.986341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.922324ms"
	I1206 09:08:33.987623       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1206 09:08:33.996582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.596623ms"
	I1206 09:08:33.996664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.75µs"
	I1206 09:08:33.996804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.372576ms"
	I1206 09:08:33.996866       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.85µs"
	I1206 09:08:34.001499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.605µs"
	I1206 09:08:34.013557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.004µs"
	I1206 09:08:34.037823       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 09:08:34.043422       1 shared_informer.go:318] Caches are synced for crt configmap
	I1206 09:08:34.056651       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 09:08:34.070837       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1206 09:08:34.081031       1 shared_informer.go:318] Caches are synced for persistent volume
	I1206 09:08:34.467191       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:08:34.467228       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1206 09:08:34.485599       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:08:37.274752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="155.625µs"
	I1206 09:08:38.276486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.888µs"
	I1206 09:08:39.388960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="109.11µs"
	I1206 09:08:40.287420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.975614ms"
	I1206 09:08:40.287584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.419µs"
	I1206 09:08:55.281301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.707396ms"
	I1206 09:08:55.282908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.988µs"
	I1206 09:08:56.329460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.739µs"
	I1206 09:09:04.302969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.421µs"
	
	
	==> kube-proxy [cb6d5b477d48c1e0e8c2ebe41e2734e7a810d702cdea465f3a9dc52ad2db62bc] <==
	I1206 09:08:21.617413       1 server_others.go:69] "Using iptables proxy"
	I1206 09:08:21.626148       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1206 09:08:21.643155       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:08:21.645717       1 server_others.go:152] "Using iptables Proxier"
	I1206 09:08:21.645758       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1206 09:08:21.645769       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1206 09:08:21.645794       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 09:08:21.646013       1 server.go:846] "Version info" version="v1.28.0"
	I1206 09:08:21.646024       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:08:21.646524       1 config.go:188] "Starting service config controller"
	I1206 09:08:21.646581       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 09:08:21.646599       1 config.go:315] "Starting node config controller"
	I1206 09:08:21.646601       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 09:08:21.646655       1 config.go:97] "Starting endpoint slice config controller"
	I1206 09:08:21.646671       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 09:08:21.746885       1 shared_informer.go:318] Caches are synced for node config
	I1206 09:08:21.746921       1 shared_informer.go:318] Caches are synced for service config
	I1206 09:08:21.747020       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2b2dc64bd75ea60632c5aec9a4334196e82d82d7f050472813594206b55966f8] <==
	I1206 09:08:19.271884       1 serving.go:348] Generated self-signed cert in-memory
	W1206 09:08:21.036124       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:08:21.036156       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:08:21.036169       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:08:21.036179       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:08:21.057122       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1206 09:08:21.057147       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:08:21.058305       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:08:21.058354       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 09:08:21.059793       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1206 09:08:21.059812       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1206 09:08:21.158612       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 06 09:08:34 old-k8s-version-322324 kubelet[735]: I1206 09:08:34.108940     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7ea6dc09-188d-4c21-95e5-40545faaeb74-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-phv8q\" (UID: \"7ea6dc09-188d-4c21-95e5-40545faaeb74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q"
	Dec 06 09:08:34 old-k8s-version-322324 kubelet[735]: I1206 09:08:34.109063     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqj9k\" (UniqueName: \"kubernetes.io/projected/c4ee64b5-d6b4-4ecd-babe-35539026efe9-kube-api-access-mqj9k\") pod \"kubernetes-dashboard-8694d4445c-62nsl\" (UID: \"c4ee64b5-d6b4-4ecd-babe-35539026efe9\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl"
	Dec 06 09:08:34 old-k8s-version-322324 kubelet[735]: I1206 09:08:34.109119     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c4ee64b5-d6b4-4ecd-babe-35539026efe9-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-62nsl\" (UID: \"c4ee64b5-d6b4-4ecd-babe-35539026efe9\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl"
	Dec 06 09:08:34 old-k8s-version-322324 kubelet[735]: I1206 09:08:34.109151     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8mw\" (UniqueName: \"kubernetes.io/projected/7ea6dc09-188d-4c21-95e5-40545faaeb74-kube-api-access-4v8mw\") pod \"dashboard-metrics-scraper-5f989dc9cf-phv8q\" (UID: \"7ea6dc09-188d-4c21-95e5-40545faaeb74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q"
	Dec 06 09:08:37 old-k8s-version-322324 kubelet[735]: I1206 09:08:37.258182     735 scope.go:117] "RemoveContainer" containerID="12e45949ce6fd28e38f9ce646a9974935121df9567f1bab409f8c49bc399829a"
	Dec 06 09:08:38 old-k8s-version-322324 kubelet[735]: I1206 09:08:38.262402     735 scope.go:117] "RemoveContainer" containerID="12e45949ce6fd28e38f9ce646a9974935121df9567f1bab409f8c49bc399829a"
	Dec 06 09:08:38 old-k8s-version-322324 kubelet[735]: I1206 09:08:38.262547     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:38 old-k8s-version-322324 kubelet[735]: E1206 09:08:38.263153     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:08:39 old-k8s-version-322324 kubelet[735]: I1206 09:08:39.267675     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:39 old-k8s-version-322324 kubelet[735]: E1206 09:08:39.268075     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:08:40 old-k8s-version-322324 kubelet[735]: I1206 09:08:40.281777     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl" podStartSLOduration=1.840202965 podCreationTimestamp="2025-12-06 09:08:33 +0000 UTC" firstStartedPulling="2025-12-06 09:08:34.315297894 +0000 UTC m=+16.214435068" lastFinishedPulling="2025-12-06 09:08:39.756804692 +0000 UTC m=+21.655941862" observedRunningTime="2025-12-06 09:08:40.281049013 +0000 UTC m=+22.180186198" watchObservedRunningTime="2025-12-06 09:08:40.281709759 +0000 UTC m=+22.180846945"
	Dec 06 09:08:44 old-k8s-version-322324 kubelet[735]: I1206 09:08:44.289428     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:44 old-k8s-version-322324 kubelet[735]: E1206 09:08:44.289714     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:08:52 old-k8s-version-322324 kubelet[735]: I1206 09:08:52.301705     735 scope.go:117] "RemoveContainer" containerID="a79a78b8625bbbf4f8feeaa1cead5d551510889d8cbc1377efeb7dbc4f117d3f"
	Dec 06 09:08:55 old-k8s-version-322324 kubelet[735]: I1206 09:08:55.190900     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:56 old-k8s-version-322324 kubelet[735]: I1206 09:08:56.316356     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:56 old-k8s-version-322324 kubelet[735]: I1206 09:08:56.316898     735 scope.go:117] "RemoveContainer" containerID="2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559"
	Dec 06 09:08:56 old-k8s-version-322324 kubelet[735]: E1206 09:08:56.317296     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:09:04 old-k8s-version-322324 kubelet[735]: I1206 09:09:04.289416     735 scope.go:117] "RemoveContainer" containerID="2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559"
	Dec 06 09:09:04 old-k8s-version-322324 kubelet[735]: E1206 09:09:04.289868     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:09:09 old-k8s-version-322324 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:09:09 old-k8s-version-322324 kubelet[735]: I1206 09:09:09.375067     735 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 06 09:09:09 old-k8s-version-322324 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:09:09 old-k8s-version-322324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:09 old-k8s-version-322324 systemd[1]: kubelet.service: Consumed 1.520s CPU time.
	
	
	==> kubernetes-dashboard [e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344] <==
	2025/12/06 09:08:39 Starting overwatch
	2025/12/06 09:08:39 Using namespace: kubernetes-dashboard
	2025/12/06 09:08:39 Using in-cluster config to connect to apiserver
	2025/12/06 09:08:39 Using secret token for csrf signing
	2025/12/06 09:08:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:08:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:08:39 Successful initial request to the apiserver, version: v1.28.0
	2025/12/06 09:08:39 Generating JWE encryption key
	2025/12/06 09:08:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:08:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:08:39 Initializing JWE encryption key from synchronized object
	2025/12/06 09:08:39 Creating in-cluster Sidecar client
	2025/12/06 09:08:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:08:39 Serving insecurely on HTTP port: 9090
	2025/12/06 09:09:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a79a78b8625bbbf4f8feeaa1cead5d551510889d8cbc1377efeb7dbc4f117d3f] <==
	I1206 09:08:21.581844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:08:51.585405       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7] <==
	I1206 09:08:52.355249       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:08:52.364464       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:08:52.364499       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 09:09:09.761980       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:09:09.762194       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-322324_9b19734c-896a-4d8c-add9-533af776e9bd!
	I1206 09:09:09.762653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12abded4-5e7f-4c39-bde6-291e3d08af94", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-322324_9b19734c-896a-4d8c-add9-533af776e9bd became leader
	I1206 09:09:09.862421       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-322324_9b19734c-896a-4d8c-add9-533af776e9bd!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-322324 -n old-k8s-version-322324
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-322324 -n old-k8s-version-322324: exit status 2 (462.79894ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-322324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-322324
helpers_test.go:243: (dbg) docker inspect old-k8s-version-322324:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f",
	        "Created": "2025-12-06T09:07:01.784357575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268356,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:08:12.097038519Z",
	            "FinishedAt": "2025-12-06T09:08:11.19409041Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/hosts",
	        "LogPath": "/var/lib/docker/containers/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f/7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f-json.log",
	        "Name": "/old-k8s-version-322324",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-322324:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-322324",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e0820bc743cdd6e5cc97d51b92ff1625e642e86b01cc7d59a095127797e371f",
	                "LowerDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f4d24fe5a5d801cfc890d8b06797fc43c5493112323255ab88884581f41994e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-322324",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-322324/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-322324",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-322324",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-322324",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5360085a5dff562540686582b084b9206fb75586d4eef45fb0c2e17730edf02f",
	            "SandboxKey": "/var/run/docker/netns/5360085a5dff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-322324": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6aeaf0351aa11b8b99e15f127f61cc1457ec80dfb36963930d49a8cf393d88b",
	                    "EndpointID": "24c67bc4bd66e8b1b09095bfa06c5fa81088e5f58de1c42fe58428ba1f7c4820",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "fa:29:67:b8:e2:42",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-322324",
	                        "7e0820bc743c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
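For reference, the inspect output above shows the node container's ports (22, 2376, 5000, 8443, 32443) published only on 127.0.0.1 against ephemeral host ports. A single field can be pulled out of that JSON with a Go template instead of dumping the whole document; a minimal sketch, using the container name and the 8443/tcp binding shown above:

    # Print only the host port that 8443/tcp (the API server port) is published on.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
      old-k8s-version-322324
    # From the dump above this should print 33071.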
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-322324 -n old-k8s-version-322324
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-322324 -n old-k8s-version-322324: exit status 2 (426.071389ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
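The `--format={{.Host}}` query above reports only the host state (Running) even though the profile is paused; the same Go-template mechanism can report the remaining components in one call. A minimal sketch, assuming minikube's standard status fields (Host, Kubelet, APIServer, Kubeconfig):

    # Show each component's state on one line instead of only the host state.
    out/minikube-linux-amd64 status -p old-k8s-version-322324 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'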
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-322324 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-322324 logs -n 25: (1.50547324s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:06 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p NoKubernetes-328079 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ -p NoKubernetes-328079 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ delete  │ -p NoKubernetes-328079                                                                                                                                                                                                                        │ NoKubernetes-328079          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-322324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-322324 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-769733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ stop    │ -p no-preload-769733 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-322324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-769733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ delete  │ -p stopped-upgrade-454433                                                                                                                                                                                                                     │ stopped-upgrade-454433       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ image   │ no-preload-769733 image list --format=json                                                                                                                                                                                                    │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p no-preload-769733 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-702638                                                                                                                                                                                                                  │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-217626                                                                                                                                                                                                               │ disable-driver-mounts-217626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ image   │ old-k8s-version-322324 image list --format=json                                                                                                                                                                                               │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p old-k8s-version-322324 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                          │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:09:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:09:07.781061  282948 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:07.781355  282948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:07.781366  282948 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:07.781372  282948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:07.781595  282948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:07.782109  282948 out.go:368] Setting JSON to false
	I1206 09:09:07.783505  282948 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3099,"bootTime":1765009049,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:09:07.783588  282948 start.go:143] virtualization: kvm guest
	I1206 09:09:07.785593  282948 out.go:179] * [default-k8s-diff-port-213278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:09:07.787310  282948 notify.go:221] Checking for updates...
	I1206 09:09:07.787346  282948 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:09:07.788797  282948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:09:07.789998  282948 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:07.791256  282948 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:09:07.792967  282948 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:09:07.795295  282948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:09:07.797267  282948 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:07.797404  282948 config.go:182] Loaded profile config "no-preload-769733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:07.797535  282948 config.go:182] Loaded profile config "old-k8s-version-322324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:09:07.797659  282948 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:09:07.824410  282948 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:09:07.824537  282948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:07.886126  282948 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:07.875601828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:07.886233  282948 docker.go:319] overlay module found
	I1206 09:09:07.887931  282948 out.go:179] * Using the docker driver based on user configuration
	I1206 09:09:07.889291  282948 start.go:309] selected driver: docker
	I1206 09:09:07.889310  282948 start.go:927] validating driver "docker" against <nil>
	I1206 09:09:07.889323  282948 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:09:07.889912  282948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:07.950516  282948 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:07.940060335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:07.950743  282948 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:09:07.951033  282948 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:09:07.957079  282948 out.go:179] * Using Docker driver with root privileges
	I1206 09:09:07.959247  282948 cni.go:84] Creating CNI manager for ""
	I1206 09:09:07.959341  282948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:07.959362  282948 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:09:07.959461  282948 start.go:353] cluster config:
	{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:07.961050  282948 out.go:179] * Starting "default-k8s-diff-port-213278" primary control-plane node in "default-k8s-diff-port-213278" cluster
	I1206 09:09:07.962278  282948 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:09:07.963476  282948 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:09:07.964504  282948 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:09:07.964544  282948 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:09:07.964566  282948 cache.go:65] Caching tarball of preloaded images
	I1206 09:09:07.964600  282948 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:09:07.964653  282948 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:09:07.964661  282948 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:09:07.964736  282948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json ...
	I1206 09:09:07.964750  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json: {Name:mk749be6f3b06ee84322203f3d8663effbbdb2b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:07.990589  282948 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:09:07.990618  282948 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:09:07.990636  282948 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:09:07.990679  282948 start.go:360] acquireMachinesLock for default-k8s-diff-port-213278: {Name:mk866228eff8eb9f8cbf106e77f0dc837aabddf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:09:07.990811  282948 start.go:364] duration metric: took 107.653µs to acquireMachinesLock for "default-k8s-diff-port-213278"
	I1206 09:09:07.990849  282948 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:07.990932  282948 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:09:06.052094  278230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:09:06.060053  278230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:09:06.060121  278230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:09:06.067147  278230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:09:06.074415  278230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:09:06.074468  278230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:09:06.081749  278230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:09:06.089051  278230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:09:06.089154  278230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:09:06.096652  278230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:09:06.155590  278230 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:09:06.217000  278230 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:09:07.993939  282948 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:09:07.994250  282948 start.go:159] libmachine.API.Create for "default-k8s-diff-port-213278" (driver="docker")
	I1206 09:09:07.994292  282948 client.go:173] LocalClient.Create starting
	I1206 09:09:07.994368  282948 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 09:09:07.994410  282948 main.go:143] libmachine: Decoding PEM data...
	I1206 09:09:07.994436  282948 main.go:143] libmachine: Parsing certificate...
	I1206 09:09:07.994521  282948 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 09:09:07.994547  282948 main.go:143] libmachine: Decoding PEM data...
	I1206 09:09:07.994561  282948 main.go:143] libmachine: Parsing certificate...
	I1206 09:09:07.994980  282948 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213278 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:09:08.015064  282948 cli_runner.go:211] docker network inspect default-k8s-diff-port-213278 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:09:08.015121  282948 network_create.go:284] running [docker network inspect default-k8s-diff-port-213278] to gather additional debugging logs...
	I1206 09:09:08.015149  282948 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213278
	W1206 09:09:08.032817  282948 cli_runner.go:211] docker network inspect default-k8s-diff-port-213278 returned with exit code 1
	I1206 09:09:08.032873  282948 network_create.go:287] error running [docker network inspect default-k8s-diff-port-213278]: docker network inspect default-k8s-diff-port-213278: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-213278 not found
	I1206 09:09:08.032893  282948 network_create.go:289] output of [docker network inspect default-k8s-diff-port-213278]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-213278 not found
	
	** /stderr **
	I1206 09:09:08.033019  282948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:09:08.056286  282948 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9cbe8712784d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:e7:96:d9:b6:56} reservation:<nil>}
	I1206 09:09:08.057146  282948 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e3326c841ae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:98:ee:f3:0b:a9} reservation:<nil>}
	I1206 09:09:08.057970  282948 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c7af411946b0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:ab:a1:53:1d:7e} reservation:<nil>}
	I1206 09:09:08.058566  282948 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f6aeaf0351aa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:f6:31:65:11:00} reservation:<nil>}
	I1206 09:09:08.059429  282948 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e98700}
	I1206 09:09:08.059456  282948 network_create.go:124] attempt to create docker network default-k8s-diff-port-213278 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 09:09:08.059519  282948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-213278 default-k8s-diff-port-213278
	I1206 09:09:08.113867  282948 network_create.go:108] docker network default-k8s-diff-port-213278 192.168.85.0/24 created
	I1206 09:09:08.113900  282948 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-213278" container
	I1206 09:09:08.113975  282948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:09:08.135461  282948 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-213278 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213278 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:09:08.155890  282948 oci.go:103] Successfully created a docker volume default-k8s-diff-port-213278
	I1206 09:09:08.156000  282948 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-213278-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-213278 --entrypoint /usr/bin/test -v default-k8s-diff-port-213278:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:09:08.622673  282948 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-213278
	I1206 09:09:08.622726  282948 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:09:08.622737  282948 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:09:08.622804  282948 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-213278:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
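The preceding entries show the provisioning pattern minikube uses for a new profile: create the per-profile docker network and named volume, then extract the preloaded image tarball into that volume via a throwaway kicbase container. A minimal sketch for confirming the volume and locating its data on the host (Name and Mountpoint are standard `docker volume inspect` fields; the volume name is taken from the log above):

    # Verify the profile volume exists and see where docker stores its data.
    docker volume inspect -f '{{.Name}} -> {{.Mountpoint}}' default-k8s-diff-port-213278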
	
	
	==> CRI-O <==
	Dec 06 09:08:39 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:39.799832645Z" level=info msg="Created container e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl/kubernetes-dashboard" id=3f848543-f008-4246-9cd6-d4ad25f66cf9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:39 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:39.800757775Z" level=info msg="Starting container: e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344" id=f3db9055-032a-46dc-a87c-5da7da367b78 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:08:39 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:39.803104336Z" level=info msg="Started container" PID=1726 containerID=e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl/kubernetes-dashboard id=f3db9055-032a-46dc-a87c-5da7da367b78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a8a0b0e17aa465206af94c70ecdb8b3ae4397a231276867568417257bab40ff
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.302496327Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=03bb140e-ad2e-4d0a-a1d4-abd23ea6ad0a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.30487068Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3e031413-6f11-4664-b236-676e916b59be name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.306139032Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=715d8efd-f5a7-42f2-b1e9-a4387df5b29e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.306285087Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.312362574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.312555867Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fd01d2cf56c6e7c671a4ab9dabe05990626bfb07967d47356695400d300179fa/merged/etc/passwd: no such file or directory"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.312681473Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fd01d2cf56c6e7c671a4ab9dabe05990626bfb07967d47356695400d300179fa/merged/etc/group: no such file or directory"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.313025392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.339094085Z" level=info msg="Created container bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7: kube-system/storage-provisioner/storage-provisioner" id=715d8efd-f5a7-42f2-b1e9-a4387df5b29e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.339698417Z" level=info msg="Starting container: bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7" id=fcd6ebfe-d446-4301-8c40-0f269a83aa70 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:08:52 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:52.341428294Z" level=info msg="Started container" PID=1751 containerID=bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7 description=kube-system/storage-provisioner/storage-provisioner id=fcd6ebfe-d446-4301-8c40-0f269a83aa70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2c0312d8340520c023d58145396191eace3f38806cc206dfe631bda3d557e51
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.191633563Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b5287989-3df1-455f-9ab4-49c96f3d2b5a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.221904315Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=93db0756-344c-423f-842b-1fe854bb844b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.223276443Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q/dashboard-metrics-scraper" id=5728b536-747e-4a2c-8990-6c469d4ca61a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.223429687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.281302584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.282275223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.316080907Z" level=info msg="Created container 2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q/dashboard-metrics-scraper" id=5728b536-747e-4a2c-8990-6c469d4ca61a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.316719244Z" level=info msg="Starting container: 2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559" id=26dfaee0-7102-43f7-9312-d87d54800e54 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:08:55 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:55.318926099Z" level=info msg="Started container" PID=1769 containerID=2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q/dashboard-metrics-scraper id=26dfaee0-7102-43f7-9312-d87d54800e54 name=/runtime.v1.RuntimeService/StartContainer sandboxID=993e6c747bffaff30d6132aed133a405734f3658de07050934f15340bc6706c6
	Dec 06 09:08:56 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:56.31765941Z" level=info msg="Removing container: ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8" id=83e0b6bf-868d-4782-bf47-5d7e5ecbd099 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:08:56 old-k8s-version-322324 crio[572]: time="2025-12-06T09:08:56.330189193Z" level=info msg="Removed container ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q/dashboard-metrics-scraper" id=83e0b6bf-868d-4782-bf47-5d7e5ecbd099 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2b35be0a159e1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   993e6c747bffa       dashboard-metrics-scraper-5f989dc9cf-phv8q       kubernetes-dashboard
	bdd5bafc12871       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   d2c0312d83405       storage-provisioner                              kube-system
	e4c594281c21d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   0a8a0b0e17aa4       kubernetes-dashboard-8694d4445c-62nsl            kubernetes-dashboard
	e9e02ff4d6e79       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   d381f8b624484       coredns-5dd5756b68-gf4kq                         kube-system
	e001b5d895ee9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   e1813f4264bdb       busybox                                          default
	cb6d5b477d48c       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   5004278dedc67       kube-proxy-pspsz                                 kube-system
	f19651bfbff8f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   4827ca9c95c77       kindnet-fn4nn                                    kube-system
	a79a78b8625bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   d2c0312d83405       storage-provisioner                              kube-system
	3d02391cf9525       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   6e2843d7fb7ce       etcd-old-k8s-version-322324                      kube-system
	449ca9b6a1f1e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   b2c8a0bf7c51d       kube-controller-manager-old-k8s-version-322324   kube-system
	cd24c2425e3bc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   cf2107a78ec91       kube-apiserver-old-k8s-version-322324            kube-system
	2b2dc64bd75ea       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   c28f9cc8f9745       kube-scheduler-old-k8s-version-322324            kube-system
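The container status table is read from CRI-O on the node, so it can be reproduced against the live profile without the test harness. A minimal sketch, assuming `minikube ssh` passes the command through to the node and that crictl accepts the truncated IDs printed above as prefixes:

    # List all containers (running and exited) straight from CRI-O.
    out/minikube-linux-amd64 -p old-k8s-version-322324 ssh -- sudo crictl ps -a
    # Tail the restarted storage-provisioner container by its ID prefix from the table.
    out/minikube-linux-amd64 -p old-k8s-version-322324 ssh -- sudo crictl logs bdd5bafc12871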
	
	
	==> coredns [e9e02ff4d6e79c95e30cc7a811aafbfa34d658f1561ea8f3b6c2269ad2e1397b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40225 - 7770 "HINFO IN 2873878907286317631.7812860973207805637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018686056s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
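The coredns log shows the server waiting for the Kubernetes API, then starting on :53 and answering its own HINFO health probe. On an unpaused profile the pod backing this log can be checked from outside the node with the bundled kubectl; a minimal sketch, assuming the conventional k8s-app=kube-dns label on the coredns deployment:

    # Confirm the coredns pod from the log above is Running and Ready.
    out/minikube-linux-amd64 -p old-k8s-version-322324 kubectl -- \
      -n kube-system get pods -l k8s-app=kube-dns -o wide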
	
	
	==> describe nodes <==
	Name:               old-k8s-version-322324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-322324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=old-k8s-version-322324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_07_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-322324
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:09:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:08:51 +0000   Sat, 06 Dec 2025 09:07:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:08:51 +0000   Sat, 06 Dec 2025 09:07:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:08:51 +0000   Sat, 06 Dec 2025 09:07:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:08:51 +0000   Sat, 06 Dec 2025 09:07:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-322324
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                a183c0fa-92d5-4537-8c49-640a14d95f5a
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-gf4kq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-322324                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-fn4nn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-322324             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-322324    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-pspsz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-322324             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-phv8q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-62nsl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node old-k8s-version-322324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node old-k8s-version-322324 event: Registered Node old-k8s-version-322324 in Controller
	  Normal  NodeReady                94s                kubelet          Node old-k8s-version-322324 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-322324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-322324 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node old-k8s-version-322324 event: Registered Node old-k8s-version-322324 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [3d02391cf9525e0f4953634ac19b398b1652701a94dddd15a44811b30512ea3a] <==
	{"level":"info","ts":"2025-12-06T09:08:18.752891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-06T09:08:18.752979Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-06T09:08:18.753131Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:08:18.753205Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-06T09:08:18.755305Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-06T09:08:18.755455Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-06T09:08:18.755514Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-06T09:08:18.75559Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-06T09:08:18.755618Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-06T09:08:20.045322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-06T09:08:20.045371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-06T09:08:20.045394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-06T09:08:20.045409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-06T09:08:20.045415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-06T09:08:20.045422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-06T09:08:20.045429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-06T09:08:20.046536Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-322324 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-06T09:08:20.046549Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:08:20.046578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-06T09:08:20.046771Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-06T09:08:20.046801Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-06T09:08:20.047727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-06T09:08:20.04773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-06T09:08:39.387083Z","caller":"traceutil/trace.go:171","msg":"trace[1814917904] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"109.905239ms","start":"2025-12-06T09:08:39.277143Z","end":"2025-12-06T09:08:39.387048Z","steps":["trace[1814917904] 'process raft request'  (duration: 87.39612ms)","trace[1814917904] 'compare'  (duration: 22.356695ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:09:12.380408Z","caller":"traceutil/trace.go:171","msg":"trace[1974420864] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"133.96369ms","start":"2025-12-06T09:09:12.246409Z","end":"2025-12-06T09:09:12.380373Z","steps":["trace[1974420864] 'process raft request'  (duration: 126.086152ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:09:15 up 51 min,  0 user,  load average: 2.93, 2.29, 1.71
	Linux old-k8s-version-322324 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f19651bfbff8fcbd9e841cc22d3284b680b67b091a55ceb311089f61ad655413] <==
	I1206 09:08:21.816667       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:08:21.816890       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1206 09:08:21.817116       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:08:21.817151       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:08:21.817180       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:08:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:08:22.113115       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:08:22.113246       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:08:22.113339       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:08:22.113845       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:08:22.413594       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:08:22.413623       1 metrics.go:72] Registering metrics
	I1206 09:08:22.413677       1 controller.go:711] "Syncing nftables rules"
	I1206 09:08:32.018476       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:08:32.018519       1 main.go:301] handling current node
	I1206 09:08:42.018459       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:08:42.018486       1 main.go:301] handling current node
	I1206 09:08:52.017815       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:08:52.017846       1 main.go:301] handling current node
	I1206 09:09:02.017508       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:09:02.017547       1 main.go:301] handling current node
	I1206 09:09:12.024078       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1206 09:09:12.024120       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cd24c2425e3bca0058ef42378cc74157a256df97dddd9fdcb2d5c40e9fe7acd1] <==
	I1206 09:08:21.049827       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:08:21.101804       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:08:21.102388       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1206 09:08:21.102611       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1206 09:08:21.102647       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1206 09:08:21.102649       1 shared_informer.go:318] Caches are synced for configmaps
	I1206 09:08:21.102657       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1206 09:08:21.102668       1 aggregator.go:166] initial CRD sync complete...
	I1206 09:08:21.102681       1 autoregister_controller.go:141] Starting autoregister controller
	I1206 09:08:21.102688       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:08:21.102696       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:08:21.103063       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1206 09:08:21.107696       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:08:21.144597       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1206 09:08:21.987342       1 controller.go:624] quota admission added evaluator for: namespaces
	I1206 09:08:22.005470       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:08:22.020235       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1206 09:08:22.040206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:08:22.048737       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:08:22.056433       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1206 09:08:22.093047       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.79.218"}
	I1206 09:08:22.104956       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.37.47"}
	I1206 09:08:33.940973       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 09:08:33.955847       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1206 09:08:33.975152       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [449ca9b6a1f1ebf09546774acd9a2b46ec5e9f317ca3c7946c4a21ff592aa0a1] <==
	I1206 09:08:33.986341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.922324ms"
	I1206 09:08:33.987623       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1206 09:08:33.996582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.596623ms"
	I1206 09:08:33.996664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.75µs"
	I1206 09:08:33.996804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.372576ms"
	I1206 09:08:33.996866       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.85µs"
	I1206 09:08:34.001499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.605µs"
	I1206 09:08:34.013557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.004µs"
	I1206 09:08:34.037823       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 09:08:34.043422       1 shared_informer.go:318] Caches are synced for crt configmap
	I1206 09:08:34.056651       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 09:08:34.070837       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1206 09:08:34.081031       1 shared_informer.go:318] Caches are synced for persistent volume
	I1206 09:08:34.467191       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:08:34.467228       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1206 09:08:34.485599       1 shared_informer.go:318] Caches are synced for garbage collector
	I1206 09:08:37.274752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="155.625µs"
	I1206 09:08:38.276486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.888µs"
	I1206 09:08:39.388960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="109.11µs"
	I1206 09:08:40.287420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.975614ms"
	I1206 09:08:40.287584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.419µs"
	I1206 09:08:55.281301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.707396ms"
	I1206 09:08:55.282908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.988µs"
	I1206 09:08:56.329460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.739µs"
	I1206 09:09:04.302969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.421µs"
	
	
	==> kube-proxy [cb6d5b477d48c1e0e8c2ebe41e2734e7a810d702cdea465f3a9dc52ad2db62bc] <==
	I1206 09:08:21.617413       1 server_others.go:69] "Using iptables proxy"
	I1206 09:08:21.626148       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1206 09:08:21.643155       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:08:21.645717       1 server_others.go:152] "Using iptables Proxier"
	I1206 09:08:21.645758       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1206 09:08:21.645769       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1206 09:08:21.645794       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 09:08:21.646013       1 server.go:846] "Version info" version="v1.28.0"
	I1206 09:08:21.646024       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:08:21.646524       1 config.go:188] "Starting service config controller"
	I1206 09:08:21.646581       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 09:08:21.646599       1 config.go:315] "Starting node config controller"
	I1206 09:08:21.646601       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 09:08:21.646655       1 config.go:97] "Starting endpoint slice config controller"
	I1206 09:08:21.646671       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 09:08:21.746885       1 shared_informer.go:318] Caches are synced for node config
	I1206 09:08:21.746921       1 shared_informer.go:318] Caches are synced for service config
	I1206 09:08:21.747020       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2b2dc64bd75ea60632c5aec9a4334196e82d82d7f050472813594206b55966f8] <==
	I1206 09:08:19.271884       1 serving.go:348] Generated self-signed cert in-memory
	W1206 09:08:21.036124       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:08:21.036156       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:08:21.036169       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:08:21.036179       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:08:21.057122       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1206 09:08:21.057147       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:08:21.058305       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:08:21.058354       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 09:08:21.059793       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1206 09:08:21.059812       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1206 09:08:21.158612       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 06 09:08:34 old-k8s-version-322324 kubelet[735]: I1206 09:08:34.108940     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7ea6dc09-188d-4c21-95e5-40545faaeb74-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-phv8q\" (UID: \"7ea6dc09-188d-4c21-95e5-40545faaeb74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q"
	Dec 06 09:08:34 old-k8s-version-322324 kubelet[735]: I1206 09:08:34.109063     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqj9k\" (UniqueName: \"kubernetes.io/projected/c4ee64b5-d6b4-4ecd-babe-35539026efe9-kube-api-access-mqj9k\") pod \"kubernetes-dashboard-8694d4445c-62nsl\" (UID: \"c4ee64b5-d6b4-4ecd-babe-35539026efe9\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl"
	Dec 06 09:08:34 old-k8s-version-322324 kubelet[735]: I1206 09:08:34.109119     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c4ee64b5-d6b4-4ecd-babe-35539026efe9-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-62nsl\" (UID: \"c4ee64b5-d6b4-4ecd-babe-35539026efe9\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl"
	Dec 06 09:08:34 old-k8s-version-322324 kubelet[735]: I1206 09:08:34.109151     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8mw\" (UniqueName: \"kubernetes.io/projected/7ea6dc09-188d-4c21-95e5-40545faaeb74-kube-api-access-4v8mw\") pod \"dashboard-metrics-scraper-5f989dc9cf-phv8q\" (UID: \"7ea6dc09-188d-4c21-95e5-40545faaeb74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q"
	Dec 06 09:08:37 old-k8s-version-322324 kubelet[735]: I1206 09:08:37.258182     735 scope.go:117] "RemoveContainer" containerID="12e45949ce6fd28e38f9ce646a9974935121df9567f1bab409f8c49bc399829a"
	Dec 06 09:08:38 old-k8s-version-322324 kubelet[735]: I1206 09:08:38.262402     735 scope.go:117] "RemoveContainer" containerID="12e45949ce6fd28e38f9ce646a9974935121df9567f1bab409f8c49bc399829a"
	Dec 06 09:08:38 old-k8s-version-322324 kubelet[735]: I1206 09:08:38.262547     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:38 old-k8s-version-322324 kubelet[735]: E1206 09:08:38.263153     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:08:39 old-k8s-version-322324 kubelet[735]: I1206 09:08:39.267675     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:39 old-k8s-version-322324 kubelet[735]: E1206 09:08:39.268075     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:08:40 old-k8s-version-322324 kubelet[735]: I1206 09:08:40.281777     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-62nsl" podStartSLOduration=1.840202965 podCreationTimestamp="2025-12-06 09:08:33 +0000 UTC" firstStartedPulling="2025-12-06 09:08:34.315297894 +0000 UTC m=+16.214435068" lastFinishedPulling="2025-12-06 09:08:39.756804692 +0000 UTC m=+21.655941862" observedRunningTime="2025-12-06 09:08:40.281049013 +0000 UTC m=+22.180186198" watchObservedRunningTime="2025-12-06 09:08:40.281709759 +0000 UTC m=+22.180846945"
	Dec 06 09:08:44 old-k8s-version-322324 kubelet[735]: I1206 09:08:44.289428     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:44 old-k8s-version-322324 kubelet[735]: E1206 09:08:44.289714     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:08:52 old-k8s-version-322324 kubelet[735]: I1206 09:08:52.301705     735 scope.go:117] "RemoveContainer" containerID="a79a78b8625bbbf4f8feeaa1cead5d551510889d8cbc1377efeb7dbc4f117d3f"
	Dec 06 09:08:55 old-k8s-version-322324 kubelet[735]: I1206 09:08:55.190900     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:56 old-k8s-version-322324 kubelet[735]: I1206 09:08:56.316356     735 scope.go:117] "RemoveContainer" containerID="ccf1fd3a457cf9d818acdc5505cd5264d8ace22d609c5fa43c35c02fe3d52fe8"
	Dec 06 09:08:56 old-k8s-version-322324 kubelet[735]: I1206 09:08:56.316898     735 scope.go:117] "RemoveContainer" containerID="2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559"
	Dec 06 09:08:56 old-k8s-version-322324 kubelet[735]: E1206 09:08:56.317296     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:09:04 old-k8s-version-322324 kubelet[735]: I1206 09:09:04.289416     735 scope.go:117] "RemoveContainer" containerID="2b35be0a159e1769da8007b4feddc6d55bf5be0c1cfb0a77a4cab36a2c673559"
	Dec 06 09:09:04 old-k8s-version-322324 kubelet[735]: E1206 09:09:04.289868     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-phv8q_kubernetes-dashboard(7ea6dc09-188d-4c21-95e5-40545faaeb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-phv8q" podUID="7ea6dc09-188d-4c21-95e5-40545faaeb74"
	Dec 06 09:09:09 old-k8s-version-322324 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:09:09 old-k8s-version-322324 kubelet[735]: I1206 09:09:09.375067     735 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 06 09:09:09 old-k8s-version-322324 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:09:09 old-k8s-version-322324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:09 old-k8s-version-322324 systemd[1]: kubelet.service: Consumed 1.520s CPU time.
	
	
	==> kubernetes-dashboard [e4c594281c21d2e729e44b50f233f2cfc8df59089644cb69f0e63cf192471344] <==
	2025/12/06 09:08:39 Using namespace: kubernetes-dashboard
	2025/12/06 09:08:39 Using in-cluster config to connect to apiserver
	2025/12/06 09:08:39 Using secret token for csrf signing
	2025/12/06 09:08:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:08:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:08:39 Successful initial request to the apiserver, version: v1.28.0
	2025/12/06 09:08:39 Generating JWE encryption key
	2025/12/06 09:08:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:08:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:08:39 Initializing JWE encryption key from synchronized object
	2025/12/06 09:08:39 Creating in-cluster Sidecar client
	2025/12/06 09:08:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:08:39 Serving insecurely on HTTP port: 9090
	2025/12/06 09:09:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:08:39 Starting overwatch
	
	
	==> storage-provisioner [a79a78b8625bbbf4f8feeaa1cead5d551510889d8cbc1377efeb7dbc4f117d3f] <==
	I1206 09:08:21.581844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:08:51.585405       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bdd5bafc128710ae75fa891a7d5ade311b6b5fa2ef975c10e636dc4c354b33a7] <==
	I1206 09:08:52.355249       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:08:52.364464       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:08:52.364499       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 09:09:09.761980       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:09:09.762194       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-322324_9b19734c-896a-4d8c-add9-533af776e9bd!
	I1206 09:09:09.762653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12abded4-5e7f-4c39-bde6-291e3d08af94", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-322324_9b19734c-896a-4d8c-add9-533af776e9bd became leader
	I1206 09:09:09.862421       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-322324_9b19734c-896a-4d8c-add9-533af776e9bd!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-322324 -n old-k8s-version-322324
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-322324 -n old-k8s-version-322324: exit status 2 (361.285752ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-322324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-718157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-718157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (292.323457ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
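The paused-state check behind the MK_ADDON_ENABLE_PAUSED error shells out to "sudo runc list -f json", which cannot succeed on this profile: the container runtime is cri-o, so runc's state directory /run/runc does not exist. Below is a minimal, illustrative sketch of a runtime-aware variant of that check; the helper name, the runtime switch, and the crictl fallback are assumptions for illustration only, not minikube's actual code.

	// paused_check_sketch.go — illustrative sketch only, not minikube's implementation.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// runtimeContainerList returns the raw container listing from a CLI that
	// matches the container runtime. Querying runc directly fails on a cri-o
	// node ("open /run/runc: no such file or directory"), so cri-o and
	// containerd profiles go through crictl instead.
	func runtimeContainerList(runtime string) (string, error) {
		var cmd *exec.Cmd
		switch runtime {
		case "crio", "containerd":
			// crictl ps -a --quiet prints one container ID per line.
			cmd = exec.Command("sudo", "crictl", "ps", "-a", "--quiet")
		default:
			// runc keeps its state under /run/runc on Docker-style hosts.
			cmd = exec.Command("sudo", "runc", "list", "-f", "json")
		}
		out, err := cmd.Output()
		if err != nil {
			return "", fmt.Errorf("listing containers: %w", err)
		}
		return string(out), nil
	}
	
	func main() {
		out, err := runtimeContainerList("crio")
		if err != nil {
			fmt.Println("paused check would fail here:", err)
			return
		}
		fmt.Printf("runtime reported %d containers\n", len(strings.Fields(out)))
	}

On a cri-o node, "crictl ps -a --quiet" goes through the CRI socket rather than runc's on-disk state, which is why the fallback path in the sketch would avoid the /run/runc error captured in the stderr output above.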
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-718157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-718157
helpers_test.go:243: (dbg) docker inspect newest-cni-718157:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c",
	        "Created": "2025-12-06T09:09:19.234709377Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:09:19.273250849Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/hosts",
	        "LogPath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c-json.log",
	        "Name": "/newest-cni-718157",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-718157:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-718157",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c",
	                "LowerDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-718157",
	                "Source": "/var/lib/docker/volumes/newest-cni-718157/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-718157",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-718157",
	                "name.minikube.sigs.k8s.io": "newest-cni-718157",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3c1a53458892c45568d107fef1094638b12d93cef377b4e4239bcf1cd4cd61b6",
	            "SandboxKey": "/var/run/docker/netns/3c1a53458892",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-718157": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50d0f2baf000bc1c263e721b7068e9545be54f5ae74e0afeafff76b764fd61ec",
	                    "EndpointID": "f5b1ef51a086603c361a70e7f9e536bb357986673367a8d00c1debf717c35cae",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "a2:92:ca:b8:74:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-718157",
	                        "a65b6e472b2d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718157 -n newest-cni-718157
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-718157 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-718157 logs -n 25: (1.079307794s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-322324 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-769733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ stop    │ -p no-preload-769733 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-322324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-769733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ delete  │ -p stopped-upgrade-454433                                                                                                                                                                                                                            │ stopped-upgrade-454433       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ image   │ no-preload-769733 image list --format=json                                                                                                                                                                                                           │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p no-preload-769733 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-702638                                                                                                                                                                                                                         │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-217626                                                                                                                                                                                                                      │ disable-driver-mounts-217626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ image   │ old-k8s-version-322324 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p old-k8s-version-322324 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                                 │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                                 │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p old-k8s-version-322324                                                                                                                                                                                                                            │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p old-k8s-version-322324                                                                                                                                                                                                                            │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p auto-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-718157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:09:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
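
Note: every entry that follows uses the klog line format declared above. A small, illustrative Go parser can make these dumps easier to slice; the regular expression below is just one reading of the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" string, not a klog API.

    // klog_line_sketch.go - tiny parser for the log line format declared above,
    // handy when filtering these dumps by process id or source file.
    package main

    import (
        "fmt"
        "regexp"
    )

    // severity, mmdd, hh:mm:ss.uuuuuu, threadid, file:line, msg
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
        line := "I1206 09:09:20.773036  289573 out.go:360] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
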
	I1206 09:09:20.773036  289573 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:20.773172  289573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:20.773184  289573 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:20.773190  289573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:20.773456  289573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:20.774082  289573 out.go:368] Setting JSON to false
	I1206 09:09:20.775374  289573 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3112,"bootTime":1765009049,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:09:20.775451  289573 start.go:143] virtualization: kvm guest
	I1206 09:09:20.781161  289573 out.go:179] * [auto-646473] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:09:20.782723  289573 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:09:20.782791  289573 notify.go:221] Checking for updates...
	I1206 09:09:20.785636  289573 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:09:20.786955  289573 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:20.788477  289573 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:09:20.789977  289573 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:09:20.791391  289573 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:09:20.793407  289573 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:20.793557  289573 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:20.793700  289573 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:20.793854  289573 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:09:20.826114  289573 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:09:20.826281  289573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:20.908825  289573 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-06 09:09:20.895566223 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:20.908980  289573 docker.go:319] overlay module found
	I1206 09:09:20.915120  289573 out.go:179] * Using the docker driver based on user configuration
	I1206 09:09:20.916715  289573 start.go:309] selected driver: docker
	I1206 09:09:20.916732  289573 start.go:927] validating driver "docker" against <nil>
	I1206 09:09:20.916746  289573 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:09:20.917542  289573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:20.990116  289573 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-06 09:09:20.97967006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
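
Note: the two "docker system info --format "{{json .}}"" runs above are what driver validation reads (CPU count, memory, cgroup driver, server version). The sketch below decodes just those fields from the same command; it is an illustrative stand-in, not minikube's info.go.

    // docker_info_sketch.go - decode the handful of fields from `docker system info`
    // that the driver check above reports. Field names follow Docker's JSON output.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type dockerInfo struct {
        NCPU          int    `json:"NCPU"`
        MemTotal      int64  `json:"MemTotal"`
        CgroupDriver  string `json:"CgroupDriver"`
        ServerVersion string `json:"ServerVersion"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker not available:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("unexpected docker info output:", err)
            return
        }
        fmt.Printf("cpus=%d mem=%dMiB cgroup=%s server=%s\n",
            info.NCPU, info.MemTotal/1024/1024, info.CgroupDriver, info.ServerVersion)
    }
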
	I1206 09:09:20.990303  289573 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:09:20.990600  289573 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:09:20.992483  289573 out.go:179] * Using Docker driver with root privileges
	I1206 09:09:20.993772  289573 cni.go:84] Creating CNI manager for ""
	I1206 09:09:20.993866  289573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:20.993881  289573 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:09:20.994011  289573 start.go:353] cluster config:
	{Name:auto-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1206 09:09:20.996100  289573 out.go:179] * Starting "auto-646473" primary control-plane node in "auto-646473" cluster
	I1206 09:09:20.997921  289573 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:09:20.999511  289573 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:09:19.295250  282948 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213278 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:09:19.316851  282948 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:09:19.322171  282948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:19.335337  282948 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:09:19.335522  282948 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:09:19.335593  282948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:19.373236  282948 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:19.373259  282948 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:09:19.373314  282948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:19.407677  282948 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:19.407703  282948 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:09:19.407713  282948 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1206 09:09:19.407801  282948 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-213278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
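
Note: the kubelet unit text above is rendered from the cluster config and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines further down). The sketch below renders a comparable drop-in with text/template; the template body and struct fields are assumptions for illustration, not minikube's real templates.

    // kubelet_dropin_sketch.go - illustrative rendering of a kubelet systemd drop-in
    // like the one shown above, filled with values taken from this log.
    package main

    import (
        "os"
        "text/template"
    )

    const dropin = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("dropin").Parse(dropin))
        // Values taken from the log above (default-k8s-diff-port-213278 on 192.168.85.2).
        _ = t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.34.2", "default-k8s-diff-port-213278", "192.168.85.2"})
    }
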
	I1206 09:09:19.407874  282948 ssh_runner.go:195] Run: crio config
	I1206 09:09:19.461843  282948 cni.go:84] Creating CNI manager for ""
	I1206 09:09:19.461877  282948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:19.461899  282948 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:09:19.461929  282948 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-213278 NodeName:default-k8s-diff-port-213278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:09:19.462203  282948 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-213278"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:09:19.462279  282948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:09:19.472465  282948 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:09:19.472517  282948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:09:19.481404  282948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 09:09:19.494811  282948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:09:19.519257  282948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
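
Note: the kubeadm.yaml written above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick way to sanity-check a copy of it is to decode each document and inspect the kubelet cgroup driver and CRI endpoint; the sketch below does that with gopkg.in/yaml.v3, which is an assumed dependency for the example, not what minikube uses internally.

    // kubeadm_yaml_check_sketch.go - decode each document of a kubeadm config like the
    // one generated above and print its kind plus the kubelet cgroup driver.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Println("decode error:", err)
                return
            }
            kind, _ := doc["kind"].(string)
            fmt.Println("found document:", kind)
            if kind == "KubeletConfiguration" {
                fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
                fmt.Println("  containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
            }
        }
    }
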
	I1206 09:09:19.533213  282948 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:09:19.538605  282948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
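
Note: the bash one-liner above strips any stale control-plane.minikube.internal line from /etc/hosts and appends the current IP. A minimal Go sketch of the same replace-then-append step follows; it writes to a local file (hosts.test) instead of /etc/hosts so it can run without sudo, and is illustrative only.

    // hosts_entry_sketch.go - drop any existing "<name>" line, then append "<ip>\t<name>",
    // mirroring the /etc/hosts update performed by the one-liner above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func upsertHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHostsEntry("hosts.test", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
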
	I1206 09:09:19.552300  282948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:19.675422  282948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:19.701664  282948 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278 for IP: 192.168.85.2
	I1206 09:09:19.701693  282948 certs.go:195] generating shared ca certs ...
	I1206 09:09:19.701713  282948 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:19.701872  282948 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:09:19.701934  282948 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:09:19.701949  282948 certs.go:257] generating profile certs ...
	I1206 09:09:19.702030  282948 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/client.key
	I1206 09:09:19.702044  282948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/client.crt with IP's: []
	I1206 09:09:19.805704  282948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/client.crt ...
	I1206 09:09:19.805739  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/client.crt: {Name:mk55eb4e85cf2ca3b80df6c84fb578f40eac4c41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:19.805929  282948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/client.key ...
	I1206 09:09:19.805945  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/client.key: {Name:mk7716d118176a16e37724b3ebe60878da88754b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:19.806082  282948 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key.817b52b0
	I1206 09:09:19.806100  282948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.crt.817b52b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 09:09:19.960434  282948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.crt.817b52b0 ...
	I1206 09:09:19.960465  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.crt.817b52b0: {Name:mk8347aa9a61626c98e74d8cff9146ffa0d9a10c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:19.960642  282948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key.817b52b0 ...
	I1206 09:09:19.960664  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key.817b52b0: {Name:mkade499461e1bdc51792f989ee13198dcea6348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:19.960774  282948 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.crt.817b52b0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.crt
	I1206 09:09:19.960882  282948 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key.817b52b0 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key
	I1206 09:09:19.960972  282948 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.key
	I1206 09:09:19.961005  282948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.crt with IP's: []
	I1206 09:09:20.114737  282948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.crt ...
	I1206 09:09:20.114767  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.crt: {Name:mkbcfc95f98f1773c4be79c05c7127ddc115dbf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:20.114933  282948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.key ...
	I1206 09:09:20.114973  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.key: {Name:mk0b166e1923f2d86c642f881c236f8f7c3af87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
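
Note: the certs.go/crypto.go lines above produce CA-signed profile certificates for the listed IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). The sketch below shows how such SANs end up in a certificate using crypto/x509; it is self-signed to stay short, whereas the real flow signs these with the shared minikubeCA.

    // profile_cert_sketch.go - illustrative generation of a certificate carrying the same
    // IP SANs as the apiserver cert above. Self-signed for brevity.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
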
	I1206 09:09:20.115211  282948 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:09:20.115262  282948 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:09:20.115279  282948 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:09:20.115311  282948 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:09:20.115348  282948 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:09:20.115387  282948 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:09:20.115489  282948 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:20.116081  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:09:20.135208  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:09:20.154879  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:09:20.174151  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:09:20.194548  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1206 09:09:20.212475  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:09:20.236085  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:09:20.259246  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:09:20.282103  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:09:20.323869  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:09:20.344842  282948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:09:20.366251  282948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:09:20.380469  282948 ssh_runner.go:195] Run: openssl version
	I1206 09:09:20.387814  282948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:09:20.397226  282948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:09:20.405460  282948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:09:20.409292  282948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:09:20.409364  282948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:09:20.452045  282948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:20.462294  282948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:20.472208  282948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:20.482410  282948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:09:20.492191  282948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:20.496105  282948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:20.496168  282948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:20.545894  282948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:09:20.556224  282948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:09:20.564768  282948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:09:20.574458  282948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:09:20.584376  282948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:09:20.589151  282948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:09:20.589213  282948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:09:20.640705  282948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:09:20.650871  282948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
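
Note: each CA bundle above is installed by asking openssl for its subject hash and then symlinking /etc/ssl/certs/<hash>.0 at the PEM file. The sketch below reproduces that pair of steps locally with os/exec; paths are illustrative, and the real run performs this over SSH inside the node with sudo.

    // ca_symlink_sketch.go - compute a PEM file's subject hash via openssl, then point
    // /etc/ssl/certs/<hash>.0 at it (the ln -fs equivalent shown above).
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA in this run
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        if err := os.Symlink(pemPath, link); err != nil {
            fmt.Println("symlink failed (expected without root):", err)
            return
        }
        fmt.Println(link, "->", pemPath)
    }
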
	I1206 09:09:20.659357  282948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:09:20.663965  282948 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:09:20.664041  282948 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:20.664126  282948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:09:20.664177  282948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:09:20.702613  282948 cri.go:89] found id: ""
	I1206 09:09:20.702682  282948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:09:20.712835  282948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:09:20.721687  282948 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:09:20.721742  282948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:09:20.731141  282948 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:09:20.731170  282948 kubeadm.go:158] found existing configuration files:
	
	I1206 09:09:20.731217  282948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1206 09:09:20.741821  282948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:09:20.741877  282948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:09:20.750490  282948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1206 09:09:20.759762  282948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:09:20.759816  282948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:09:20.770168  282948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1206 09:09:20.780190  282948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:09:20.780256  282948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:09:20.790567  282948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1206 09:09:20.802873  282948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:09:20.802929  282948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
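
Note: the grep/rm sequence above keeps a kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint; anything else is removed before kubeadm init. A compact sketch of the same loop over local file copies (the real flow greps over SSH with sudo):

    // stale_kubeconfig_sketch.go - remove a config file unless it mentions the expected
    // control-plane endpoint, mirroring the cleanup loop above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                // Missing file: nothing to clean up (the grep above exits with status 2 in this case).
                continue
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Println("removing stale config:", f)
                _ = os.Remove(f)
            }
        }
    }
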
	I1206 09:09:20.814247  282948 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:09:20.870584  282948 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:09:20.870663  282948 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:09:20.903540  282948 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:09:20.903630  282948 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:09:20.903673  282948 kubeadm.go:319] OS: Linux
	I1206 09:09:20.903736  282948 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:09:20.903798  282948 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:09:20.903901  282948 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:09:20.903977  282948 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:09:20.904084  282948 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:09:20.904189  282948 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:09:20.904289  282948 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:09:20.904371  282948 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:09:20.991172  282948 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:09:20.991299  282948 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:09:20.991441  282948 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:09:21.000766  282948 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:09:21.000760  289573 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:09:21.000799  289573 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:09:21.000797  289573 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:09:21.000809  289573 cache.go:65] Caching tarball of preloaded images
	I1206 09:09:21.001042  289573 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:09:21.001058  289573 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:09:21.001220  289573 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/config.json ...
	I1206 09:09:21.001252  289573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/config.json: {Name:mk00790cc22c8f7d90945e86284c86a8d9b80a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:21.023667  289573 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:09:21.023685  289573 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:09:21.023701  289573 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:09:21.023731  289573 start.go:360] acquireMachinesLock for auto-646473: {Name:mk218379a346dfe8fff847c1817d96b4db77f84e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:09:21.023843  289573 start.go:364] duration metric: took 89.755µs to acquireMachinesLock for "auto-646473"
	I1206 09:09:21.023872  289573 start.go:93] Provisioning new machine with config: &{Name:auto-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646473 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:21.023977  289573 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:09:20.483941  278230 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:09:20.488790  278230 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:09:20.488809  278230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:09:20.503138  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:09:20.760472  278230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:09:20.760547  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:20.760633  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-931091 minikube.k8s.io/updated_at=2025_12_06T09_09_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=embed-certs-931091 minikube.k8s.io/primary=true
	I1206 09:09:20.772421  278230 ops.go:34] apiserver oom_adj: -16
	I1206 09:09:20.861373  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:21.002738  282948 out.go:252]   - Generating certificates and keys ...
	I1206 09:09:21.002843  282948 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:09:21.002948  282948 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:09:21.216423  282948 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:09:21.837778  282948 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:09:22.093023  282948 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:09:22.123497  282948 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:09:22.362681  282948 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:09:22.363028  282948 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-213278 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:09:22.715567  282948 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:09:22.715752  282948 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-213278 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:09:19.775303  286725 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:09:20.241805  286725 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:20.264378  286725 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:09:20.264410  286725 kic_runner.go:114] Args: [docker exec --privileged newest-cni-718157 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:09:20.321864  286725 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:20.344225  286725 machine.go:94] provisionDockerMachine start ...
	I1206 09:09:20.344321  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:20.366246  286725 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:20.366601  286725 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1206 09:09:20.366625  286725 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:09:20.502470  286725 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-718157
	
	I1206 09:09:20.502496  286725 ubuntu.go:182] provisioning hostname "newest-cni-718157"
	I1206 09:09:20.502557  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:20.525087  286725 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:20.525409  286725 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1206 09:09:20.525434  286725 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-718157 && echo "newest-cni-718157" | sudo tee /etc/hostname
	I1206 09:09:20.677233  286725 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-718157
	
	I1206 09:09:20.677323  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:20.703622  286725 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:20.703913  286725 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1206 09:09:20.703934  286725 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718157' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718157/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718157' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:09:20.846723  286725 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:09:20.846752  286725 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:09:20.846776  286725 ubuntu.go:190] setting up certificates
	I1206 09:09:20.846786  286725 provision.go:84] configureAuth start
	I1206 09:09:20.846841  286725 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718157
	I1206 09:09:20.879187  286725 provision.go:143] copyHostCerts
	I1206 09:09:20.879307  286725 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:09:20.879347  286725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:09:20.879478  286725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:09:20.879672  286725 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:09:20.879701  286725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:09:20.879772  286725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:09:20.879885  286725 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:09:20.879934  286725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:09:20.880021  286725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:09:20.880166  286725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718157 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-718157]
	I1206 09:09:20.971962  286725 provision.go:177] copyRemoteCerts
	I1206 09:09:20.972094  286725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:09:20.972183  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:20.994298  286725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:21.093408  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:09:21.118347  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:09:21.138039  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:09:21.156916  286725 provision.go:87] duration metric: took 310.120969ms to configureAuth
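The server certificate generated during configureAuth above is signed for the SAN set [127.0.0.1 192.168.94.2 localhost minikube newest-cni-718157] and is copied to /etc/docker/server.pem on the node. An illustrative way to confirm the SANs (this command is not part of the captured run):

openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
# expected to list roughly: DNS:localhost, DNS:minikube, DNS:newest-cni-718157, IP:127.0.0.1, IP:192.168.94.2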
	I1206 09:09:21.156941  286725 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:09:21.157148  286725 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:21.157263  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:21.177365  286725 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:21.177658  286725 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1206 09:09:21.177687  286725 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:09:21.493630  286725 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:09:21.493655  286725 machine.go:97] duration metric: took 1.149404912s to provisionDockerMachine
	I1206 09:09:21.493667  286725 client.go:176] duration metric: took 6.440786585s to LocalClient.Create
	I1206 09:09:21.493683  286725 start.go:167] duration metric: took 6.440865099s to libmachine.API.Create "newest-cni-718157"
	I1206 09:09:21.493692  286725 start.go:293] postStartSetup for "newest-cni-718157" (driver="docker")
	I1206 09:09:21.493707  286725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:09:21.493781  286725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:09:21.493834  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:21.517953  286725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:21.627449  286725 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:09:21.631041  286725 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:09:21.631076  286725 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:09:21.631089  286725 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:09:21.631145  286725 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:09:21.631255  286725 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:09:21.631376  286725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:09:21.639942  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:21.664660  286725 start.go:296] duration metric: took 170.952283ms for postStartSetup
	I1206 09:09:21.665214  286725 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718157
	I1206 09:09:21.686469  286725 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/config.json ...
	I1206 09:09:21.686756  286725 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:09:21.686807  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:21.705804  286725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:21.799349  286725 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:09:21.804228  286725 start.go:128] duration metric: took 6.753630833s to createHost
	I1206 09:09:21.804250  286725 start.go:83] releasing machines lock for "newest-cni-718157", held for 6.753794284s
	I1206 09:09:21.804318  286725 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718157
	I1206 09:09:21.827714  286725 ssh_runner.go:195] Run: cat /version.json
	I1206 09:09:21.827771  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:21.827815  286725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:09:21.827902  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:21.850449  286725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:21.851617  286725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:21.947773  286725 ssh_runner.go:195] Run: systemctl --version
	I1206 09:09:22.015751  286725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:09:22.053466  286725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:09:22.058244  286725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:09:22.058310  286725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:09:22.085932  286725 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:09:22.085960  286725 start.go:496] detecting cgroup driver to use...
	I1206 09:09:22.086024  286725 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:09:22.086093  286725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:09:22.104202  286725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:09:22.116897  286725 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:09:22.116959  286725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:09:22.134719  286725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:09:22.154084  286725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:09:22.249670  286725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:09:22.346571  286725 docker.go:234] disabling docker service ...
	I1206 09:09:22.346645  286725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:09:22.367322  286725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:09:22.381242  286725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:09:22.479843  286725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:09:22.569814  286725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:09:22.582836  286725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:09:22.597789  286725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:09:22.597853  286725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:22.610165  286725 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:09:22.610233  286725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:22.619593  286725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:22.628379  286725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:22.638499  286725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:09:22.646705  286725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:22.655284  286725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:22.669757  286725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:22.679078  286725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:09:22.686897  286725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:09:22.695166  286725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:22.798369  286725 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:09:24.880864  286725 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.082455254s)
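The crictl.yaml write and the sed edits above (09:09:22.582 through 09:09:22.669) leave the runtime configured roughly as follows; this is a reconstruction from the commands, since the resulting files are not dumped in this log:

cat /etc/crictl.yaml /etc/crio/crio.conf.d/02-crio.conf
# /etc/crictl.yaml:
#   runtime-endpoint: unix:///var/run/crio/crio.sock
# 02-crio.conf, relevant keys after the edits:
#   pause_image = "registry.k8s.io/pause:3.10.1"
#   cgroup_manager = "systemd"
#   conmon_cgroup = "pod"
#   default_sysctls = [
#     "net.ipv4.ip_unprivileged_port_start=0",
#   ]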
	I1206 09:09:24.880919  286725 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:09:24.880971  286725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:09:24.886468  286725 start.go:564] Will wait 60s for crictl version
	I1206 09:09:24.886580  286725 ssh_runner.go:195] Run: which crictl
	I1206 09:09:24.890737  286725 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:09:24.922838  286725 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:09:24.922922  286725 ssh_runner.go:195] Run: crio --version
	I1206 09:09:24.961215  286725 ssh_runner.go:195] Run: crio --version
	I1206 09:09:25.007530  286725 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 09:09:25.010246  286725 cli_runner.go:164] Run: docker network inspect newest-cni-718157 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:09:25.030880  286725 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:09:25.036084  286725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:25.049957  286725 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 09:09:22.958635  282948 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:09:23.272525  282948 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:09:23.452442  282948 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:09:23.452551  282948 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:09:23.891397  282948 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:09:24.291735  282948 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:09:24.682727  282948 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:09:24.771650  282948 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:09:25.194128  282948 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:09:25.194935  282948 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:09:25.200653  282948 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:09:21.026579  289573 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:09:21.026759  289573 start.go:159] libmachine.API.Create for "auto-646473" (driver="docker")
	I1206 09:09:21.026790  289573 client.go:173] LocalClient.Create starting
	I1206 09:09:21.026867  289573 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 09:09:21.026903  289573 main.go:143] libmachine: Decoding PEM data...
	I1206 09:09:21.026921  289573 main.go:143] libmachine: Parsing certificate...
	I1206 09:09:21.026968  289573 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 09:09:21.026999  289573 main.go:143] libmachine: Decoding PEM data...
	I1206 09:09:21.027018  289573 main.go:143] libmachine: Parsing certificate...
	I1206 09:09:21.027345  289573 cli_runner.go:164] Run: docker network inspect auto-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:09:21.045012  289573 cli_runner.go:211] docker network inspect auto-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:09:21.045082  289573 network_create.go:284] running [docker network inspect auto-646473] to gather additional debugging logs...
	I1206 09:09:21.045102  289573 cli_runner.go:164] Run: docker network inspect auto-646473
	W1206 09:09:21.062365  289573 cli_runner.go:211] docker network inspect auto-646473 returned with exit code 1
	I1206 09:09:21.062405  289573 network_create.go:287] error running [docker network inspect auto-646473]: docker network inspect auto-646473: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-646473 not found
	I1206 09:09:21.062418  289573 network_create.go:289] output of [docker network inspect auto-646473]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-646473 not found
	
	** /stderr **
	I1206 09:09:21.062496  289573 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:09:21.080073  289573 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9cbe8712784d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:e7:96:d9:b6:56} reservation:<nil>}
	I1206 09:09:21.080753  289573 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e3326c841ae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:98:ee:f3:0b:a9} reservation:<nil>}
	I1206 09:09:21.081488  289573 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c7af411946b0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:ab:a1:53:1d:7e} reservation:<nil>}
	I1206 09:09:21.082283  289573 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df44f0}
	I1206 09:09:21.082303  289573 network_create.go:124] attempt to create docker network auto-646473 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1206 09:09:21.082341  289573 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-646473 auto-646473
	I1206 09:09:21.134952  289573 network_create.go:108] docker network auto-646473 192.168.76.0/24 created
	I1206 09:09:21.134998  289573 kic.go:121] calculated static IP "192.168.76.2" for the "auto-646473" container
	I1206 09:09:21.135062  289573 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:09:21.154707  289573 cli_runner.go:164] Run: docker volume create auto-646473 --label name.minikube.sigs.k8s.io=auto-646473 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:09:21.176468  289573 oci.go:103] Successfully created a docker volume auto-646473
	I1206 09:09:21.176568  289573 cli_runner.go:164] Run: docker run --rm --name auto-646473-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-646473 --entrypoint /usr/bin/test -v auto-646473:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:09:21.600330  289573 oci.go:107] Successfully prepared a docker volume auto-646473
	I1206 09:09:21.600413  289573 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:09:21.600424  289573 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:09:21.600477  289573 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-646473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:09:24.755636  289573 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-646473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.15510918s)
	I1206 09:09:24.755668  289573 kic.go:203] duration metric: took 3.155240515s to extract preloaded images to volume ...
	W1206 09:09:24.755743  289573 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:09:24.755776  289573 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:09:24.755812  289573 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:09:24.819132  289573 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-646473 --name auto-646473 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-646473 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-646473 --network auto-646473 --ip 192.168.76.2 --volume auto-646473:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:09:25.141328  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Running}}
	I1206 09:09:25.166120  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Status}}
	I1206 09:09:25.188715  289573 cli_runner.go:164] Run: docker exec auto-646473 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:09:25.239275  289573 oci.go:144] the created container "auto-646473" has a running status.
	I1206 09:09:25.239306  289573 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa...
	I1206 09:09:25.303696  289573 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:09:25.346412  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Status}}
	I1206 09:09:25.371295  289573 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:09:25.371318  289573 kic_runner.go:114] Args: [docker exec --privileged auto-646473 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:09:25.429429  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Status}}
	I1206 09:09:25.457828  289573 machine.go:94] provisionDockerMachine start ...
	I1206 09:09:25.457919  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:25.485347  289573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:25.485730  289573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:09:25.485748  289573 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:09:25.486458  289573 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48550->127.0.0.1:33093: read: connection reset by peer
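For readability, the docker run that created the auto-646473 node container (09:09:24.819 above) breaks down roughly as follows; an abridged, commented restatement, not a command that was executed separately:

# privileged kicbase container acting as the minikube "node"
docker run -d -t --privileged \
  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
  --tmpfs /tmp --tmpfs /run \
  -v /lib/modules:/lib/modules:ro \
  --volume auto-646473:/var \
  --network auto-646473 --ip 192.168.76.2 \
  --memory=3072mb \
  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
  --hostname auto-646473 --name auto-646473 \
  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
# the named volume auto-646473:/var carries the images extracted by the preload step at 09:09:21.600;
# the loopback-only --publish flags are what the later SSH connection to 127.0.0.1:33093 resolves to.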
	I1206 09:09:21.362235  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:21.862200  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:22.362207  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:22.861629  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:23.361957  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:23.862022  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:24.361735  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:24.862249  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:25.362100  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:25.861499  278230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:25.955431  278230 kubeadm.go:1114] duration metric: took 5.194943576s to wait for elevateKubeSystemPrivileges
	I1206 09:09:25.955566  278230 kubeadm.go:403] duration metric: took 19.97082781s to StartCluster
	I1206 09:09:25.955597  278230 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:25.955753  278230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:25.957024  278230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:25.957287  278230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:25.957421  278230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:09:25.957496  278230 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:09:25.957568  278230 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-931091"
	I1206 09:09:25.957586  278230 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-931091"
	I1206 09:09:25.957612  278230 host.go:66] Checking if "embed-certs-931091" exists ...
	I1206 09:09:25.957611  278230 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:25.957656  278230 addons.go:70] Setting default-storageclass=true in profile "embed-certs-931091"
	I1206 09:09:25.957693  278230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-931091"
	I1206 09:09:25.958165  278230 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:09:25.958231  278230 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:09:25.959301  278230 out.go:179] * Verifying Kubernetes components...
	I1206 09:09:25.962181  278230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:25.989417  278230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:09:25.990166  278230 addons.go:239] Setting addon default-storageclass=true in "embed-certs-931091"
	I1206 09:09:25.990220  278230 host.go:66] Checking if "embed-certs-931091" exists ...
	I1206 09:09:25.990645  278230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:25.990664  278230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:09:25.990675  278230 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:09:25.990715  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:26.020845  278230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:26.020889  278230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:09:26.020890  278230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:09:26.020962  278230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:09:26.048088  278230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:09:26.067265  278230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:09:26.139892  278230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:26.144628  278230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:26.160659  278230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:26.262386  278230 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1206 09:09:26.464810  278230 node_ready.go:35] waiting up to 6m0s for node "embed-certs-931091" to be "Ready" ...
	I1206 09:09:26.470873  278230 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
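The coredns ConfigMap rewrite at 09:09:26.067 above injects a host record for host.minikube.internal; based on the sed expressions, the resulting Corefile should look roughly like this (illustrative check, output not captured in this log):

kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
# a "log" directive is inserted before the existing "errors" line, and this block is
# inserted just above "forward . /etc/resolv.conf":
#        hosts {
#           192.168.103.1 host.minikube.internal
#           fallthrough
#        }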
	I1206 09:09:25.051528  286725 kubeadm.go:884] updating cluster {Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:09:25.051707  286725 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:09:25.051774  286725 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:25.092963  286725 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:25.092982  286725 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:09:25.093047  286725 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:25.125102  286725 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:25.125133  286725 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:09:25.125144  286725 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 09:09:25.125278  286725 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-718157 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:09:25.125386  286725 ssh_runner.go:195] Run: crio config
	I1206 09:09:25.194122  286725 cni.go:84] Creating CNI manager for ""
	I1206 09:09:25.194155  286725 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:25.194175  286725 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 09:09:25.194205  286725 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-718157 NodeName:newest-cni-718157 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:09:25.194357  286725 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-718157"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:09:25.194429  286725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:09:25.205393  286725 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:09:25.205467  286725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:09:25.215264  286725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 09:09:25.230889  286725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:09:25.253597  286725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1206 09:09:25.280124  286725 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:09:25.286339  286725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:25.300054  286725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:25.420431  286725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:25.452676  286725 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157 for IP: 192.168.94.2
	I1206 09:09:25.452700  286725 certs.go:195] generating shared ca certs ...
	I1206 09:09:25.452719  286725 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:25.452871  286725 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:09:25.452954  286725 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:09:25.452970  286725 certs.go:257] generating profile certs ...
	I1206 09:09:25.453060  286725 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/client.key
	I1206 09:09:25.453082  286725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/client.crt with IP's: []
	I1206 09:09:25.510794  286725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/client.crt ...
	I1206 09:09:25.510821  286725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/client.crt: {Name:mka066d28a642d6e07de1582c4e62e5dc31ea693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:25.510979  286725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/client.key ...
	I1206 09:09:25.511002  286725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/client.key: {Name:mkfc78f3390adf2480502df8a76f4336b6886d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:25.511107  286725 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key.5210bb9f
	I1206 09:09:25.511130  286725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.crt.5210bb9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:09:25.584656  286725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.crt.5210bb9f ...
	I1206 09:09:25.584756  286725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.crt.5210bb9f: {Name:mk804decbf7b93507e45e20caeb7eec9033f2aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:25.584948  286725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key.5210bb9f ...
	I1206 09:09:25.584968  286725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key.5210bb9f: {Name:mkb22adbe0000ebfd0062adb546ba75a2b54b9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:25.585100  286725 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.crt.5210bb9f -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.crt
	I1206 09:09:25.585205  286725 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key.5210bb9f -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key
	I1206 09:09:25.585289  286725 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.key
	I1206 09:09:25.585323  286725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.crt with IP's: []
	I1206 09:09:25.746688  286725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.crt ...
	I1206 09:09:25.746724  286725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.crt: {Name:mk7e238dbc090190aeda308ca623148d56ed13a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:25.746939  286725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.key ...
	I1206 09:09:25.746965  286725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.key: {Name:mk565eead9766ad6c2dbb48373150abb7e3e07e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:25.747257  286725 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:09:25.747316  286725 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:09:25.747335  286725 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:09:25.747380  286725 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:09:25.747431  286725 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:09:25.747469  286725 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:09:25.747541  286725 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:25.748326  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:09:25.773659  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:09:25.795135  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:09:25.814962  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:09:25.833231  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:09:25.851160  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:09:25.877034  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:09:25.903961  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:09:25.926655  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:09:25.957563  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:09:25.986983  286725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:09:26.011312  286725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:09:26.039440  286725 ssh_runner.go:195] Run: openssl version
	I1206 09:09:26.048853  286725 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:09:26.059699  286725 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:09:26.070001  286725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:09:26.075072  286725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:09:26.075152  286725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:09:26.135699  286725 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:26.146816  286725 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:26.156626  286725 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:26.167262  286725 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:09:26.177836  286725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:26.183101  286725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:26.183199  286725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:26.238047  286725 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:09:26.249647  286725 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:09:26.258950  286725 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:09:26.268730  286725 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:09:26.279384  286725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:09:26.284358  286725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:09:26.284413  286725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:09:26.329136  286725 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:09:26.337553  286725 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
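The openssl/ln pairs above implement the usual OpenSSL hashed-symlink layout for CA lookup; schematically, using the minikubeCA file from this run (illustrative sketch of what minikube does over SSH):

# 1. install the cert where OpenSSL looks for CAs
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
# 2. compute the subject hash; in this run it was b5213941 for minikubeCA
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
# 3. create the hash-named link that OpenSSL actually resolves, e.g. /etc/ssl/certs/b5213941.0
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"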
	I1206 09:09:26.345237  286725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:09:26.349264  286725 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:09:26.349334  286725 kubeadm.go:401] StartCluster: {Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:26.349414  286725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:09:26.349469  286725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:09:26.378793  286725 cri.go:89] found id: ""
	I1206 09:09:26.378864  286725 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:09:26.387999  286725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:09:26.399326  286725 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:09:26.399403  286725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:09:26.409819  286725 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:09:26.409839  286725 kubeadm.go:158] found existing configuration files:
	
	I1206 09:09:26.409901  286725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:09:26.420886  286725 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:09:26.420945  286725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:09:26.430314  286725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:09:26.441918  286725 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:09:26.441984  286725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:09:26.451038  286725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:09:26.461980  286725 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:09:26.462072  286725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:09:26.470557  286725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:09:26.481194  286725 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:09:26.481252  286725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
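The grep/rm sequence above is the stale-config cleanup: each kubeconfig is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it on the next init. A rough Go sketch of the same pattern, shelling out locally instead of over SSH; the `run` helper and loop are hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command through bash, standing in for the ssh_runner calls
// in the log, and reports whether it exited zero.
func run(cmd string) error {
	return exec.Command("bash", "-c", cmd).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint (or the file itself) is missing;
		// in that case the file is removed so `kubeadm init` writes a fresh one.
		if err := run(fmt.Sprintf("sudo grep %q %s", endpoint, conf)); err != nil {
			_ = run("sudo rm -f " + conf)
		}
	}
}
```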
	I1206 09:09:26.488896  286725 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:09:26.541391  286725 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:09:26.541479  286725 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:09:26.618919  286725 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:09:26.619023  286725 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:09:26.619066  286725 kubeadm.go:319] OS: Linux
	I1206 09:09:26.619123  286725 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:09:26.619188  286725 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:09:26.619266  286725 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:09:26.619332  286725 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:09:26.619402  286725 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:09:26.619464  286725 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:09:26.619525  286725 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:09:26.619582  286725 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:09:26.684256  286725 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:09:26.684408  286725 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:09:26.684550  286725 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:09:26.692165  286725 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:09:25.202381  282948 out.go:252]   - Booting up control plane ...
	I1206 09:09:25.202517  282948 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:09:25.202611  282948 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:09:25.203628  282948 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:09:25.220640  282948 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:09:25.220800  282948 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:09:25.228452  282948 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:09:25.228721  282948 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:09:25.228778  282948 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:09:25.373518  282948 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:09:25.373712  282948 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:09:26.375090  282948 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0017815s
	I1206 09:09:26.378488  282948 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:09:26.378610  282948 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1206 09:09:26.378759  282948 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:09:26.378891  282948 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:09:27.531153  282948 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.1525528s
	I1206 09:09:26.696146  286725 out.go:252]   - Generating certificates and keys ...
	I1206 09:09:26.696271  286725 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:09:26.696365  286725 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:09:26.737105  286725 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:09:26.829292  286725 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:09:26.962735  286725 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:09:27.131177  286725 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:09:27.201091  286725 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:09:27.201274  286725 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-718157] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:09:27.310151  286725 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:09:27.310329  286725 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-718157] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:09:27.370694  286725 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:09:27.534686  286725 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:09:27.591857  286725 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:09:27.591971  286725 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:09:27.670752  286725 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:09:27.746297  286725 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:09:27.798790  286725 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:09:27.850572  286725 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:09:27.932077  286725 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:09:27.932917  286725 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:09:27.937202  286725 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:09:27.938895  286725 out.go:252]   - Booting up control plane ...
	I1206 09:09:27.939060  286725 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:09:27.939172  286725 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:09:27.941095  286725 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:09:27.956866  286725 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:09:27.957049  286725 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:09:27.965240  286725 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:09:27.965643  286725 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:09:27.965738  286725 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:09:28.090768  286725 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:09:28.090946  286725 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:09:28.592311  286725 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.671074ms
	I1206 09:09:28.596419  286725 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:09:28.596580  286725 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:09:28.596688  286725 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:09:28.596757  286725 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:09:29.107186  286725 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 509.948784ms
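The [kubelet-check] and [control-plane-check] phases above poll health endpoints (http://127.0.0.1:10248/healthz for the kubelet, the livez/healthz URLs for the control-plane components) until they answer or the 4m0s budget expires. A minimal sketch of such a poll loop; the URL and budget come from the log, everything else is illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes,
// roughly what kubeadm's kubelet-check phase does against /healthz.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```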
	I1206 09:09:28.535801  282948 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.157280869s
	I1206 09:09:30.380564  282948 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001996742s
	I1206 09:09:30.403169  282948 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:09:30.421641  282948 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:09:30.436283  282948 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:09:30.436562  282948 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-213278 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:09:30.449898  282948 kubeadm.go:319] [bootstrap-token] Using token: lfp0e6.jtukaysigo1y7ler
	I1206 09:09:28.631048  289573 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-646473
	
	I1206 09:09:28.631072  289573 ubuntu.go:182] provisioning hostname "auto-646473"
	I1206 09:09:28.631142  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:28.650940  289573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:28.651192  289573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:09:28.651206  289573 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-646473 && echo "auto-646473" | sudo tee /etc/hostname
	I1206 09:09:28.801733  289573 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-646473
	
	I1206 09:09:28.801817  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:28.826241  289573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:28.826632  289573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:09:28.826659  289573 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-646473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-646473/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-646473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:09:28.979524  289573 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:09:28.979568  289573 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:09:28.979629  289573 ubuntu.go:190] setting up certificates
	I1206 09:09:28.979641  289573 provision.go:84] configureAuth start
	I1206 09:09:28.979700  289573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-646473
	I1206 09:09:29.002063  289573 provision.go:143] copyHostCerts
	I1206 09:09:29.002131  289573 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:09:29.002145  289573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:09:29.002224  289573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:09:29.002341  289573 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:09:29.002352  289573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:09:29.002396  289573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:09:29.002488  289573 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:09:29.002508  289573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:09:29.002544  289573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:09:29.002614  289573 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.auto-646473 san=[127.0.0.1 192.168.76.2 auto-646473 localhost minikube]
	I1206 09:09:29.088846  289573 provision.go:177] copyRemoteCerts
	I1206 09:09:29.088917  289573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:09:29.088958  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:29.110674  289573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa Username:docker}
	I1206 09:09:29.206749  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:09:29.228679  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:09:29.247395  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1206 09:09:29.264699  289573 provision.go:87] duration metric: took 285.039478ms to configureAuth
	I1206 09:09:29.264729  289573 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:09:29.264892  289573 config.go:182] Loaded profile config "auto-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:29.265018  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:29.283754  289573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:29.283960  289573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:09:29.283975  289573 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:09:29.563040  289573 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:09:29.563075  289573 machine.go:97] duration metric: took 4.105227537s to provisionDockerMachine
	I1206 09:09:29.563087  289573 client.go:176] duration metric: took 8.536289835s to LocalClient.Create
	I1206 09:09:29.563110  289573 start.go:167] duration metric: took 8.53634989s to libmachine.API.Create "auto-646473"
	I1206 09:09:29.563119  289573 start.go:293] postStartSetup for "auto-646473" (driver="docker")
	I1206 09:09:29.563131  289573 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:09:29.563223  289573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:09:29.563271  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:29.586397  289573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa Username:docker}
	I1206 09:09:29.696259  289573 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:09:29.700684  289573 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:09:29.700719  289573 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:09:29.700733  289573 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:09:29.700815  289573 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:09:29.700942  289573 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:09:29.701099  289573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:09:29.711659  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:29.754160  289573 start.go:296] duration metric: took 191.013997ms for postStartSetup
	I1206 09:09:29.754796  289573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-646473
	I1206 09:09:29.778576  289573 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/config.json ...
	I1206 09:09:29.778892  289573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:09:29.778947  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:29.803786  289573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa Username:docker}
	I1206 09:09:29.905640  289573 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:09:29.911394  289573 start.go:128] duration metric: took 8.887401282s to createHost
	I1206 09:09:29.911431  289573 start.go:83] releasing machines lock for "auto-646473", held for 8.887573509s
	I1206 09:09:29.911511  289573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-646473
	I1206 09:09:29.935520  289573 ssh_runner.go:195] Run: cat /version.json
	I1206 09:09:29.935645  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:29.935539  289573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:09:29.935807  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:29.960272  289573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa Username:docker}
	I1206 09:09:29.960468  289573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa Username:docker}
	I1206 09:09:30.135868  289573 ssh_runner.go:195] Run: systemctl --version
	I1206 09:09:30.143817  289573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:09:30.182202  289573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:09:30.187282  289573 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:09:30.187354  289573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:09:30.223670  289573 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:09:30.223693  289573 start.go:496] detecting cgroup driver to use...
	I1206 09:09:30.223729  289573 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:09:30.223788  289573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:09:30.241898  289573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:09:30.254965  289573 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:09:30.255049  289573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:09:30.272085  289573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:09:30.293225  289573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:09:30.376641  289573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:09:30.484884  289573 docker.go:234] disabling docker service ...
	I1206 09:09:30.484951  289573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:09:30.503959  289573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:09:30.521581  289573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:09:30.660965  289573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:09:30.764742  289573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:09:26.472046  278230 addons.go:530] duration metric: took 514.545113ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:09:26.767562  278230 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-931091" context rescaled to 1 replicas
	W1206 09:09:28.468869  278230 node_ready.go:57] node "embed-certs-931091" has "Ready":"False" status (will retry)
	W1206 09:09:30.968038  278230 node_ready.go:57] node "embed-certs-931091" has "Ready":"False" status (will retry)
	I1206 09:09:30.780755  289573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:09:30.800549  289573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:09:30.800705  289573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:30.813748  289573 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:09:30.813808  289573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:30.824035  289573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:30.837223  289573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:30.847078  289573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:09:30.855491  289573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:30.865386  289573 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:30.879707  289573 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:30.888542  289573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:09:30.896067  289573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:09:30.903224  289573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:30.992675  289573 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:09:31.148783  289573 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:09:31.148843  289573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:09:31.153037  289573 start.go:564] Will wait 60s for crictl version
	I1206 09:09:31.153086  289573 ssh_runner.go:195] Run: which crictl
	I1206 09:09:31.156904  289573 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:09:31.185736  289573 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:09:31.185818  289573 ssh_runner.go:195] Run: crio --version
	I1206 09:09:31.217087  289573 ssh_runner.go:195] Run: crio --version
	I1206 09:09:31.246713  289573 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
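After rewriting /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, sysctls) and restarting the service, the log waits up to 60s for /var/run/crio/crio.sock to appear and then queries `crictl version`. A small Go sketch of that readiness check; paths and timeouts are taken from the log, the structure is assumed:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls for the CRI socket to appear, mirroring the
// "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	// Equivalent to the `sudo /usr/local/bin/crictl version` call in the log.
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	fmt.Println(string(out), err)
}
```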
	I1206 09:09:30.452065  282948 out.go:252]   - Configuring RBAC rules ...
	I1206 09:09:30.452224  282948 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:09:30.455545  282948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:09:30.462556  282948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:09:30.465316  282948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:09:30.468692  282948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:09:30.471309  282948 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:09:30.788857  282948 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:09:31.202853  282948 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:09:31.788154  282948 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:09:31.789327  282948 kubeadm.go:319] 
	I1206 09:09:31.789427  282948 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:09:31.789441  282948 kubeadm.go:319] 
	I1206 09:09:31.789544  282948 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:09:31.789557  282948 kubeadm.go:319] 
	I1206 09:09:31.789629  282948 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:09:31.789708  282948 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:09:31.789767  282948 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:09:31.789773  282948 kubeadm.go:319] 
	I1206 09:09:31.789845  282948 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:09:31.789870  282948 kubeadm.go:319] 
	I1206 09:09:31.789936  282948 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:09:31.789948  282948 kubeadm.go:319] 
	I1206 09:09:31.790033  282948 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:09:31.790162  282948 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:09:31.790273  282948 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:09:31.790284  282948 kubeadm.go:319] 
	I1206 09:09:31.790406  282948 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:09:31.790525  282948 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:09:31.790536  282948 kubeadm.go:319] 
	I1206 09:09:31.790648  282948 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token lfp0e6.jtukaysigo1y7ler \
	I1206 09:09:31.790793  282948 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:09:31.790823  282948 kubeadm.go:319] 	--control-plane 
	I1206 09:09:31.790830  282948 kubeadm.go:319] 
	I1206 09:09:31.790944  282948 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:09:31.790956  282948 kubeadm.go:319] 
	I1206 09:09:31.791079  282948 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token lfp0e6.jtukaysigo1y7ler \
	I1206 09:09:31.791215  282948 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:09:31.794805  282948 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:09:31.795022  282948 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:09:31.795062  282948 cni.go:84] Creating CNI manager for ""
	I1206 09:09:31.795072  282948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:31.796761  282948 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:09:30.694557  286725 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.098106614s
	I1206 09:09:32.598740  286725 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002347072s
	I1206 09:09:32.620108  286725 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:09:32.632983  286725 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:09:32.643343  286725 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:09:32.643660  286725 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-718157 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:09:32.652599  286725 kubeadm.go:319] [bootstrap-token] Using token: 5zwt8u.zbvhygchk4val2oz
	I1206 09:09:31.797689  282948 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:09:31.802544  282948 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:09:31.802565  282948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:09:31.818679  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:09:32.096300  282948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:09:32.096372  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:32.096422  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-213278 minikube.k8s.io/updated_at=2025_12_06T09_09_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=default-k8s-diff-port-213278 minikube.k8s.io/primary=true
	I1206 09:09:32.110024  282948 ops.go:34] apiserver oom_adj: -16
	I1206 09:09:32.193629  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:32.693880  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:32.654056  286725 out.go:252]   - Configuring RBAC rules ...
	I1206 09:09:32.654220  286725 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:09:32.657752  286725 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:09:32.664660  286725 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:09:32.667686  286725 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:09:32.670579  286725 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:09:32.673344  286725 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:09:33.005363  286725 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:09:33.420897  286725 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:09:34.004429  286725 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:09:34.005424  286725 kubeadm.go:319] 
	I1206 09:09:34.005555  286725 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:09:34.005576  286725 kubeadm.go:319] 
	I1206 09:09:34.005702  286725 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:09:34.005720  286725 kubeadm.go:319] 
	I1206 09:09:34.005756  286725 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:09:34.005833  286725 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:09:34.005904  286725 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:09:34.005915  286725 kubeadm.go:319] 
	I1206 09:09:34.005983  286725 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:09:34.006018  286725 kubeadm.go:319] 
	I1206 09:09:34.006097  286725 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:09:34.006117  286725 kubeadm.go:319] 
	I1206 09:09:34.006206  286725 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:09:34.006322  286725 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:09:34.006406  286725 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:09:34.006416  286725 kubeadm.go:319] 
	I1206 09:09:34.006524  286725 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:09:34.006649  286725 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:09:34.006663  286725 kubeadm.go:319] 
	I1206 09:09:34.006789  286725 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5zwt8u.zbvhygchk4val2oz \
	I1206 09:09:34.006934  286725 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:09:34.006972  286725 kubeadm.go:319] 	--control-plane 
	I1206 09:09:34.006978  286725 kubeadm.go:319] 
	I1206 09:09:34.007103  286725 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:09:34.007120  286725 kubeadm.go:319] 
	I1206 09:09:34.007241  286725 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5zwt8u.zbvhygchk4val2oz \
	I1206 09:09:34.007382  286725 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:09:34.009774  286725 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:09:34.009929  286725 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:09:34.009955  286725 cni.go:84] Creating CNI manager for ""
	I1206 09:09:34.009964  286725 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:34.011654  286725 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:09:34.013011  286725 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:09:34.017433  286725 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1206 09:09:34.017450  286725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:09:34.031331  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:09:34.260579  286725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:09:34.260666  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-718157 minikube.k8s.io/updated_at=2025_12_06T09_09_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=newest-cni-718157 minikube.k8s.io/primary=true
	I1206 09:09:34.260743  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:34.274052  286725 ops.go:34] apiserver oom_adj: -16
	I1206 09:09:34.351190  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:31.247839  289573 cli_runner.go:164] Run: docker network inspect auto-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:09:31.266289  289573 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 09:09:31.270391  289573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
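The bash one-liner above rewrites /etc/hosts so exactly one host.minikube.internal entry remains, pointing at the gateway IP. A Go sketch of the same filter-then-append update, run against a scratch copy rather than the real hosts file; the helper name and demo path are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and appends
// "<ip>\t<host>", the same idempotent rewrite the logged one-liner performs.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Demo against a scratch copy instead of /etc/hosts.
	_ = os.WriteFile("/tmp/hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostsEntry("/tmp/hosts.test", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```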
	I1206 09:09:31.281129  289573 kubeadm.go:884] updating cluster {Name:auto-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:09:31.281287  289573 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:09:31.281353  289573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:31.312970  289573 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:31.313002  289573 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:09:31.313057  289573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:31.338450  289573 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:31.338475  289573 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:09:31.338484  289573 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1206 09:09:31.338565  289573 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-646473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:09:31.338630  289573 ssh_runner.go:195] Run: crio config
	I1206 09:09:31.383682  289573 cni.go:84] Creating CNI manager for ""
	I1206 09:09:31.383714  289573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:31.383736  289573 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:09:31.383770  289573 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-646473 NodeName:auto-646473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:09:31.383925  289573 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-646473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:09:31.384029  289573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:09:31.392892  289573 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:09:31.392957  289573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:09:31.400796  289573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1206 09:09:31.414540  289573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:09:31.429431  289573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
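The generated kubeadm config copied above pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12; those ranges must not overlap or pod and service routing would collide. A quick stdlib check of that invariant, purely as a sanity sketch and not part of minikube:

```go
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks share any addresses; since CIDR
// blocks are either nested or disjoint, it suffices to test whether either
// network's base address lies inside the other.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16")
	_, services, _ := net.ParseCIDR("10.96.0.0/12")
	fmt.Println("pod/service CIDRs overlap:", overlaps(pods, services)) // expect false
}
```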
	I1206 09:09:31.442117  289573 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:09:31.445658  289573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:31.455216  289573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:31.546581  289573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:31.575643  289573 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473 for IP: 192.168.76.2
	I1206 09:09:31.575661  289573 certs.go:195] generating shared ca certs ...
	I1206 09:09:31.575676  289573 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:31.575830  289573 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:09:31.575893  289573 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:09:31.575907  289573 certs.go:257] generating profile certs ...
	I1206 09:09:31.575966  289573 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/client.key
	I1206 09:09:31.575983  289573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/client.crt with IP's: []
	I1206 09:09:31.641662  289573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/client.crt ...
	I1206 09:09:31.641692  289573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/client.crt: {Name:mk03a25f19dc93dacac1a487e0bbae7437bed847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:31.641891  289573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/client.key ...
	I1206 09:09:31.641911  289573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/client.key: {Name:mk715bc476371b294bbd778b1193407fb7a942a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:31.642080  289573 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.key.aeaed7ff
	I1206 09:09:31.642106  289573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.crt.aeaed7ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1206 09:09:31.707468  289573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.crt.aeaed7ff ...
	I1206 09:09:31.707504  289573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.crt.aeaed7ff: {Name:mke5276efbd34fe2f78730655706d542adcabe10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:31.707704  289573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.key.aeaed7ff ...
	I1206 09:09:31.707728  289573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.key.aeaed7ff: {Name:mk696cba390128cbc85ee10ef5dec4e0bfd803db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:31.707864  289573 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.crt.aeaed7ff -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.crt
	I1206 09:09:31.708015  289573 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.key.aeaed7ff -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.key
	I1206 09:09:31.708119  289573 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/proxy-client.key
	I1206 09:09:31.708229  289573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/proxy-client.crt with IP's: []
	I1206 09:09:31.912878  289573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/proxy-client.crt ...
	I1206 09:09:31.912915  289573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/proxy-client.crt: {Name:mk7fae5f5f8982daead3bc584b1171779cd4e1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:31.913139  289573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/proxy-client.key ...
	I1206 09:09:31.913165  289573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/proxy-client.key: {Name:mk3dfcd51714b7e5b5e158fc22176a80d9c0ad88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:31.913453  289573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:09:31.913508  289573 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:09:31.913524  289573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:09:31.913562  289573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:09:31.913596  289573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:09:31.913633  289573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:09:31.913695  289573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
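
For reference, the SANs requested for the apiserver cert above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) can be confirmed on the generated file with a quick openssl check; a minimal sketch, with the long Jenkins profile path shortened to an illustrative PROFILE_DIR:

    # Print the Subject Alternative Names embedded in the generated apiserver cert.
    # PROFILE_DIR is illustrative; the full path appears in the log lines above.
    PROFILE_DIR=$HOME/.minikube/profiles/auto-646473
    openssl x509 -noout -text -in "$PROFILE_DIR/apiserver.crt" | grep -A1 'Subject Alternative Name'
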
	I1206 09:09:31.914527  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:09:31.940070  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:09:31.963380  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:09:31.987560  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:09:32.010712  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1206 09:09:32.034271  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:09:32.057832  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:09:32.095809  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/auto-646473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:09:32.123056  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:09:32.148914  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:09:32.175599  289573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:09:32.200133  289573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:09:32.216903  289573 ssh_runner.go:195] Run: openssl version
	I1206 09:09:32.224611  289573 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:09:32.234576  289573 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:09:32.244195  289573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:09:32.249102  289573 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:09:32.249164  289573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:09:32.291854  289573 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:32.299585  289573 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:32.308101  289573 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:32.316236  289573 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:09:32.324407  289573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:32.328356  289573 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:32.328409  289573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:32.366843  289573 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:09:32.374734  289573 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:09:32.382412  289573 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:09:32.389590  289573 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:09:32.397048  289573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:09:32.400675  289573 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:09:32.400722  289573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:09:32.436784  289573 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:09:32.444897  289573 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
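
The test/ln sequence above is the standard OpenSSL hash-symlink convention: each CA is exposed under /etc/ssl/certs via a link named after its subject hash so TLS verification can find it. A minimal sketch of the same steps for one of the certs (paths mirror the log; the hash value differs per certificate):

    # Derive the subject hash and create the <hash>.0 symlink, as the log does for
    # minikubeCA.pem, 9158.pem and 91582.pem.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
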
	I1206 09:09:32.452784  289573 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:09:32.456615  289573 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:09:32.456681  289573 kubeadm.go:401] StartCluster: {Name:auto-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:32.456763  289573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:09:32.456822  289573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:09:32.500065  289573 cri.go:89] found id: ""
	I1206 09:09:32.500144  289573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:09:32.512279  289573 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:09:32.522310  289573 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:09:32.522382  289573 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:09:32.530857  289573 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:09:32.530889  289573 kubeadm.go:158] found existing configuration files:
	
	I1206 09:09:32.530937  289573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:09:32.538648  289573 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:09:32.538704  289573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:09:32.545784  289573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:09:32.553218  289573 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:09:32.553267  289573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:09:32.561136  289573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:09:32.568640  289573 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:09:32.568691  289573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:09:32.575742  289573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:09:32.583018  289573 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:09:32.583074  289573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:09:32.590796  289573 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:09:32.660517  289573 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:09:32.735407  289573 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1206 09:09:32.968526  278230 node_ready.go:57] node "embed-certs-931091" has "Ready":"False" status (will retry)
	W1206 09:09:35.468718  278230 node_ready.go:57] node "embed-certs-931091" has "Ready":"False" status (will retry)
	I1206 09:09:33.194218  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:33.694177  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:34.193773  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:34.693662  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:35.194257  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:35.694187  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:36.194533  282948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:36.270430  282948 kubeadm.go:1114] duration metric: took 4.174119197s to wait for elevateKubeSystemPrivileges
	I1206 09:09:36.270468  282948 kubeadm.go:403] duration metric: took 15.606430649s to StartCluster
	I1206 09:09:36.270494  282948 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:36.270568  282948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:36.271900  282948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:36.272193  282948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:09:36.272204  282948 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:36.272280  282948 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:09:36.272381  282948 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:36.272387  282948 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-213278"
	I1206 09:09:36.272427  282948 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-213278"
	I1206 09:09:36.272379  282948 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-213278"
	I1206 09:09:36.272507  282948 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-213278"
	I1206 09:09:36.272546  282948 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:09:36.272778  282948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:09:36.273037  282948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:09:36.274052  282948 out.go:179] * Verifying Kubernetes components...
	I1206 09:09:36.275624  282948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:36.300972  282948 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:09:36.301777  282948 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-213278"
	I1206 09:09:36.301824  282948 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:09:36.302352  282948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:09:36.302772  282948 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:36.302792  282948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:09:36.302841  282948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:09:36.335376  282948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:09:36.336600  282948 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:36.336622  282948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:09:36.336801  282948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:09:36.361300  282948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:09:36.382500  282948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:09:36.438605  282948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:36.452073  282948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:36.482901  282948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:36.609698  282948 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1206 09:09:36.611136  282948 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-213278" to be "Ready" ...
	I1206 09:09:36.808735  282948 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:09:36.809810  282948 addons.go:530] duration metric: took 537.529524ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:09:37.114648  282948 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-213278" context rescaled to 1 replicas
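
The sed pipeline run at 09:09:36.382500 injects a hosts block mapping host.minikube.internal to 192.168.85.1 into the CoreDNS Corefile. One way to confirm it landed, sketched against the cluster started above (the context name is taken from the log; adjust it if your kubeconfig context differs):

    # Show the patched Corefile and the injected host record.
    kubectl --context default-k8s-diff-port-213278 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
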
	I1206 09:09:34.851760  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:35.351383  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:35.851757  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:36.352184  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:36.851253  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:37.351663  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:37.852246  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:38.351719  286725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:38.424053  286725 kubeadm.go:1114] duration metric: took 4.163336188s to wait for elevateKubeSystemPrivileges
	I1206 09:09:38.424091  286725 kubeadm.go:403] duration metric: took 12.074761268s to StartCluster
	I1206 09:09:38.424114  286725 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:38.424188  286725 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:38.425759  286725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:38.426036  286725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:09:38.426041  286725 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:38.426074  286725 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:09:38.426232  286725 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-718157"
	I1206 09:09:38.426250  286725 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-718157"
	I1206 09:09:38.426278  286725 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:38.426276  286725 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:38.426283  286725 addons.go:70] Setting default-storageclass=true in profile "newest-cni-718157"
	I1206 09:09:38.426305  286725 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-718157"
	I1206 09:09:38.426696  286725 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:38.426851  286725 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:38.427545  286725 out.go:179] * Verifying Kubernetes components...
	I1206 09:09:38.429109  286725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:38.453207  286725 addons.go:239] Setting addon default-storageclass=true in "newest-cni-718157"
	I1206 09:09:38.453242  286725 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:38.453343  286725 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:09:38.453631  286725 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:38.454689  286725 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:38.454709  286725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:09:38.454781  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:38.486611  286725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:38.486657  286725 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:38.486778  286725 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:09:38.486843  286725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:38.509799  286725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:38.528436  286725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:09:38.580253  286725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:38.608048  286725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:38.619487  286725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:38.723967  286725 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1206 09:09:38.726467  286725 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:09:38.726526  286725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:09:38.934718  286725 api_server.go:72] duration metric: took 508.556596ms to wait for apiserver process to appear ...
	I1206 09:09:38.934749  286725 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:09:38.934802  286725 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:38.940281  286725 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1206 09:09:38.941172  286725 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:09:38.941194  286725 api_server.go:131] duration metric: took 6.437682ms to wait for apiserver health ...
	I1206 09:09:38.941202  286725 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:09:38.941435  286725 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:09:38.942894  286725 addons.go:530] duration metric: took 516.817446ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:09:38.943830  286725 system_pods.go:59] 8 kube-system pods found
	I1206 09:09:38.943866  286725 system_pods.go:61] "coredns-7d764666f9-4xnvs" [56b811f4-2c33-47ae-a18e-91bf00c91dda] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:09:38.943879  286725 system_pods.go:61] "etcd-newest-cni-718157" [3d942387-01d6-4fd9-a474-258befcbde87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:09:38.943894  286725 system_pods.go:61] "kindnet-6q6w2" [740bbe6b-e50c-4cf4-b593-5f871820515c] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:09:38.943903  286725 system_pods.go:61] "kube-apiserver-newest-cni-718157" [488d8c89-4121-4c74-9433-c14123aa9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:09:38.943912  286725 system_pods.go:61] "kube-controller-manager-newest-cni-718157" [f5fd9d31-9322-4da5-8e82-8e20ae26ca00] Running
	I1206 09:09:38.943924  286725 system_pods.go:61] "kube-proxy-46zxv" [13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:09:38.943931  286725 system_pods.go:61] "kube-scheduler-newest-cni-718157" [a256efa5-856a-4103-b3b9-397143dc1894] Running
	I1206 09:09:38.943936  286725 system_pods.go:61] "storage-provisioner" [72d40874-81fd-421f-95e4-7f8b2380f340] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:09:38.943944  286725 system_pods.go:74] duration metric: took 2.736689ms to wait for pod list to return data ...
	I1206 09:09:38.943953  286725 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:09:38.945968  286725 default_sa.go:45] found service account: "default"
	I1206 09:09:38.946000  286725 default_sa.go:55] duration metric: took 2.022303ms for default service account to be created ...
	I1206 09:09:38.946014  286725 kubeadm.go:587] duration metric: took 519.855666ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:09:38.946041  286725 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:09:38.948258  286725 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:09:38.948287  286725 node_conditions.go:123] node cpu capacity is 8
	I1206 09:09:38.948304  286725 node_conditions.go:105] duration metric: took 2.25666ms to run NodePressure ...
	I1206 09:09:38.948319  286725 start.go:242] waiting for startup goroutines ...
	I1206 09:09:39.229970  286725 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-718157" context rescaled to 1 replicas
	I1206 09:09:39.230042  286725 start.go:247] waiting for cluster config update ...
	I1206 09:09:39.230057  286725 start.go:256] writing updated cluster config ...
	I1206 09:09:39.230353  286725 ssh_runner.go:195] Run: rm -f paused
	I1206 09:09:39.290397  286725 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:09:39.291560  286725 out.go:179] * Done! kubectl is now configured to use "newest-cni-718157" cluster and "default" namespace by default
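
Once "Done!" is reported, the kubeconfig context name matches the profile name, so the cluster can be queried directly; a minimal follow-up, assuming the same kubeconfig the log just updated:

    # Point kubectl at the freshly started cluster and check node/pod state.
    kubectl config use-context newest-cni-718157
    kubectl get nodes -o wide
    kubectl -n kube-system get pods
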
	I1206 09:09:36.969442  278230 node_ready.go:49] node "embed-certs-931091" is "Ready"
	I1206 09:09:36.969477  278230 node_ready.go:38] duration metric: took 10.504627489s for node "embed-certs-931091" to be "Ready" ...
	I1206 09:09:36.969492  278230 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:09:36.969545  278230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:09:36.986741  278230 api_server.go:72] duration metric: took 11.029418518s to wait for apiserver process to appear ...
	I1206 09:09:36.986770  278230 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:09:36.986793  278230 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:09:36.991453  278230 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1206 09:09:36.992651  278230 api_server.go:141] control plane version: v1.34.2
	I1206 09:09:36.992681  278230 api_server.go:131] duration metric: took 5.902544ms to wait for apiserver health ...
	I1206 09:09:36.992692  278230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:09:36.996787  278230 system_pods.go:59] 8 kube-system pods found
	I1206 09:09:36.996817  278230 system_pods.go:61] "coredns-66bc5c9577-x87kt" [652accc8-2082-4045-b568-7d4a68cd961c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:09:36.996822  278230 system_pods.go:61] "etcd-embed-certs-931091" [21f920fe-8cca-4071-9852-b8234b61a527] Running
	I1206 09:09:36.996829  278230 system_pods.go:61] "kindnet-kzpz2" [6ce4c876-e571-40c7-a764-c47426d42617] Running
	I1206 09:09:36.996832  278230 system_pods.go:61] "kube-apiserver-embed-certs-931091" [7007293c-a3cf-4fd7-9fe5-bc4c94a961d6] Running
	I1206 09:09:36.996837  278230 system_pods.go:61] "kube-controller-manager-embed-certs-931091" [02251ced-d14d-4b4a-bdbf-098f64d5ed86] Running
	I1206 09:09:36.996840  278230 system_pods.go:61] "kube-proxy-9hp5d" [76177429-d0e3-430d-b316-9b5894760b2e] Running
	I1206 09:09:36.996844  278230 system_pods.go:61] "kube-scheduler-embed-certs-931091" [c0f2cca7-46a9-47b6-b41b-e747d29ecf69] Running
	I1206 09:09:36.996851  278230 system_pods.go:61] "storage-provisioner" [f06399c4-e82b-40d6-9eb5-8d37960bfdd4] Running
	I1206 09:09:36.996863  278230 system_pods.go:74] duration metric: took 4.161069ms to wait for pod list to return data ...
	I1206 09:09:36.996872  278230 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:09:36.999335  278230 default_sa.go:45] found service account: "default"
	I1206 09:09:36.999356  278230 default_sa.go:55] duration metric: took 2.476994ms for default service account to be created ...
	I1206 09:09:36.999364  278230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:09:37.002125  278230 system_pods.go:86] 8 kube-system pods found
	I1206 09:09:37.002156  278230 system_pods.go:89] "coredns-66bc5c9577-x87kt" [652accc8-2082-4045-b568-7d4a68cd961c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:09:37.002172  278230 system_pods.go:89] "etcd-embed-certs-931091" [21f920fe-8cca-4071-9852-b8234b61a527] Running
	I1206 09:09:37.002190  278230 system_pods.go:89] "kindnet-kzpz2" [6ce4c876-e571-40c7-a764-c47426d42617] Running
	I1206 09:09:37.002197  278230 system_pods.go:89] "kube-apiserver-embed-certs-931091" [7007293c-a3cf-4fd7-9fe5-bc4c94a961d6] Running
	I1206 09:09:37.002203  278230 system_pods.go:89] "kube-controller-manager-embed-certs-931091" [02251ced-d14d-4b4a-bdbf-098f64d5ed86] Running
	I1206 09:09:37.002211  278230 system_pods.go:89] "kube-proxy-9hp5d" [76177429-d0e3-430d-b316-9b5894760b2e] Running
	I1206 09:09:37.002216  278230 system_pods.go:89] "kube-scheduler-embed-certs-931091" [c0f2cca7-46a9-47b6-b41b-e747d29ecf69] Running
	I1206 09:09:37.002221  278230 system_pods.go:89] "storage-provisioner" [f06399c4-e82b-40d6-9eb5-8d37960bfdd4] Running
	I1206 09:09:37.002251  278230 retry.go:31] will retry after 266.325725ms: missing components: kube-dns
	I1206 09:09:37.272819  278230 system_pods.go:86] 8 kube-system pods found
	I1206 09:09:37.272869  278230 system_pods.go:89] "coredns-66bc5c9577-x87kt" [652accc8-2082-4045-b568-7d4a68cd961c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:09:37.272877  278230 system_pods.go:89] "etcd-embed-certs-931091" [21f920fe-8cca-4071-9852-b8234b61a527] Running
	I1206 09:09:37.272886  278230 system_pods.go:89] "kindnet-kzpz2" [6ce4c876-e571-40c7-a764-c47426d42617] Running
	I1206 09:09:37.272892  278230 system_pods.go:89] "kube-apiserver-embed-certs-931091" [7007293c-a3cf-4fd7-9fe5-bc4c94a961d6] Running
	I1206 09:09:37.272903  278230 system_pods.go:89] "kube-controller-manager-embed-certs-931091" [02251ced-d14d-4b4a-bdbf-098f64d5ed86] Running
	I1206 09:09:37.272910  278230 system_pods.go:89] "kube-proxy-9hp5d" [76177429-d0e3-430d-b316-9b5894760b2e] Running
	I1206 09:09:37.272916  278230 system_pods.go:89] "kube-scheduler-embed-certs-931091" [c0f2cca7-46a9-47b6-b41b-e747d29ecf69] Running
	I1206 09:09:37.272924  278230 system_pods.go:89] "storage-provisioner" [f06399c4-e82b-40d6-9eb5-8d37960bfdd4] Running
	I1206 09:09:37.272941  278230 retry.go:31] will retry after 366.153863ms: missing components: kube-dns
	I1206 09:09:37.642787  278230 system_pods.go:86] 8 kube-system pods found
	I1206 09:09:37.642823  278230 system_pods.go:89] "coredns-66bc5c9577-x87kt" [652accc8-2082-4045-b568-7d4a68cd961c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:09:37.642829  278230 system_pods.go:89] "etcd-embed-certs-931091" [21f920fe-8cca-4071-9852-b8234b61a527] Running
	I1206 09:09:37.642833  278230 system_pods.go:89] "kindnet-kzpz2" [6ce4c876-e571-40c7-a764-c47426d42617] Running
	I1206 09:09:37.642837  278230 system_pods.go:89] "kube-apiserver-embed-certs-931091" [7007293c-a3cf-4fd7-9fe5-bc4c94a961d6] Running
	I1206 09:09:37.642842  278230 system_pods.go:89] "kube-controller-manager-embed-certs-931091" [02251ced-d14d-4b4a-bdbf-098f64d5ed86] Running
	I1206 09:09:37.642845  278230 system_pods.go:89] "kube-proxy-9hp5d" [76177429-d0e3-430d-b316-9b5894760b2e] Running
	I1206 09:09:37.642851  278230 system_pods.go:89] "kube-scheduler-embed-certs-931091" [c0f2cca7-46a9-47b6-b41b-e747d29ecf69] Running
	I1206 09:09:37.642857  278230 system_pods.go:89] "storage-provisioner" [f06399c4-e82b-40d6-9eb5-8d37960bfdd4] Running
	I1206 09:09:37.642872  278230 retry.go:31] will retry after 387.591644ms: missing components: kube-dns
	I1206 09:09:38.035290  278230 system_pods.go:86] 8 kube-system pods found
	I1206 09:09:38.035321  278230 system_pods.go:89] "coredns-66bc5c9577-x87kt" [652accc8-2082-4045-b568-7d4a68cd961c] Running
	I1206 09:09:38.035326  278230 system_pods.go:89] "etcd-embed-certs-931091" [21f920fe-8cca-4071-9852-b8234b61a527] Running
	I1206 09:09:38.035330  278230 system_pods.go:89] "kindnet-kzpz2" [6ce4c876-e571-40c7-a764-c47426d42617] Running
	I1206 09:09:38.035333  278230 system_pods.go:89] "kube-apiserver-embed-certs-931091" [7007293c-a3cf-4fd7-9fe5-bc4c94a961d6] Running
	I1206 09:09:38.035336  278230 system_pods.go:89] "kube-controller-manager-embed-certs-931091" [02251ced-d14d-4b4a-bdbf-098f64d5ed86] Running
	I1206 09:09:38.035339  278230 system_pods.go:89] "kube-proxy-9hp5d" [76177429-d0e3-430d-b316-9b5894760b2e] Running
	I1206 09:09:38.035345  278230 system_pods.go:89] "kube-scheduler-embed-certs-931091" [c0f2cca7-46a9-47b6-b41b-e747d29ecf69] Running
	I1206 09:09:38.035348  278230 system_pods.go:89] "storage-provisioner" [f06399c4-e82b-40d6-9eb5-8d37960bfdd4] Running
	I1206 09:09:38.035355  278230 system_pods.go:126] duration metric: took 1.035985474s to wait for k8s-apps to be running ...
	I1206 09:09:38.035365  278230 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:09:38.035406  278230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:38.048393  278230 system_svc.go:56] duration metric: took 13.014548ms WaitForService to wait for kubelet
	I1206 09:09:38.048431  278230 kubeadm.go:587] duration metric: took 12.091113476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:09:38.048452  278230 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:09:38.051317  278230 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:09:38.051345  278230 node_conditions.go:123] node cpu capacity is 8
	I1206 09:09:38.051369  278230 node_conditions.go:105] duration metric: took 2.911992ms to run NodePressure ...
	I1206 09:09:38.051387  278230 start.go:242] waiting for startup goroutines ...
	I1206 09:09:38.051397  278230 start.go:247] waiting for cluster config update ...
	I1206 09:09:38.051414  278230 start.go:256] writing updated cluster config ...
	I1206 09:09:38.051707  278230 ssh_runner.go:195] Run: rm -f paused
	I1206 09:09:38.055320  278230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:09:38.058959  278230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x87kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:38.062946  278230 pod_ready.go:94] pod "coredns-66bc5c9577-x87kt" is "Ready"
	I1206 09:09:38.062967  278230 pod_ready.go:86] duration metric: took 3.983818ms for pod "coredns-66bc5c9577-x87kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:38.064794  278230 pod_ready.go:83] waiting for pod "etcd-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:38.068548  278230 pod_ready.go:94] pod "etcd-embed-certs-931091" is "Ready"
	I1206 09:09:38.068571  278230 pod_ready.go:86] duration metric: took 3.759649ms for pod "etcd-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:38.070433  278230 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:38.076497  278230 pod_ready.go:94] pod "kube-apiserver-embed-certs-931091" is "Ready"
	I1206 09:09:38.076527  278230 pod_ready.go:86] duration metric: took 6.07383ms for pod "kube-apiserver-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:38.078569  278230 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:38.460122  278230 pod_ready.go:94] pod "kube-controller-manager-embed-certs-931091" is "Ready"
	I1206 09:09:38.460152  278230 pod_ready.go:86] duration metric: took 381.563828ms for pod "kube-controller-manager-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:38.660717  278230 pod_ready.go:83] waiting for pod "kube-proxy-9hp5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:39.060371  278230 pod_ready.go:94] pod "kube-proxy-9hp5d" is "Ready"
	I1206 09:09:39.060399  278230 pod_ready.go:86] duration metric: took 399.657696ms for pod "kube-proxy-9hp5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:39.261143  278230 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:39.660556  278230 pod_ready.go:94] pod "kube-scheduler-embed-certs-931091" is "Ready"
	I1206 09:09:39.660585  278230 pod_ready.go:86] duration metric: took 399.397522ms for pod "kube-scheduler-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:09:39.660599  278230 pod_ready.go:40] duration metric: took 1.60525047s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:09:39.719374  278230 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:09:39.721615  278230 out.go:179] * Done! kubectl is now configured to use "embed-certs-931091" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.02397644Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.024941456Z" level=info msg="Ran pod sandbox 2796aba7acda2163054f7f1fae5f574924d13695c102c1217eb8ce96e82b41a8 with infra container: kube-system/kindnet-6q6w2/POD" id=7ce09570-953d-47ba-8404-dee2da2b7739 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.026180184Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e586a30c-d053-41ad-a153-84913790765b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.026917999Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-46zxv/POD" id=727faced-171f-407e-883b-d55f0aee1c14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.026975663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.027082319Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f43f1478-6ad1-4baa-9a9e-6a5519acd45b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.030070727Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=727faced-171f-407e-883b-d55f0aee1c14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.031742008Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.031751309Z" level=info msg="Creating container: kube-system/kindnet-6q6w2/kindnet-cni" id=bfbd5f66-7e55-41d5-85b6-dcd460f3000e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.031982508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.032791963Z" level=info msg="Ran pod sandbox 8ea223a3a841283180855b44e01388e72e8a2581085b3af9fcf4b6b08d4cb9dc with infra container: kube-system/kube-proxy-46zxv/POD" id=727faced-171f-407e-883b-d55f0aee1c14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.033797833Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7f1baeb7-8c6f-4233-912f-3dc6de86fb36 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.034946527Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bca1c863-a18d-41e9-ac42-02134d3ba873 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.035518317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.036320983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.039136114Z" level=info msg="Creating container: kube-system/kube-proxy-46zxv/kube-proxy" id=98978f94-9425-4e40-a52a-6b02b7d5392d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.039266126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.043147671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.044183829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.064441835Z" level=info msg="Created container c8bacf9173758120f64be7878b9b230ce21220bda71873ea9869aa6c4a0c7c0c: kube-system/kindnet-6q6w2/kindnet-cni" id=bfbd5f66-7e55-41d5-85b6-dcd460f3000e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.06517615Z" level=info msg="Starting container: c8bacf9173758120f64be7878b9b230ce21220bda71873ea9869aa6c4a0c7c0c" id=1336e38f-8fe7-4bac-a424-811cc1aec157 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.067072569Z" level=info msg="Started container" PID=1575 containerID=c8bacf9173758120f64be7878b9b230ce21220bda71873ea9869aa6c4a0c7c0c description=kube-system/kindnet-6q6w2/kindnet-cni id=1336e38f-8fe7-4bac-a424-811cc1aec157 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2796aba7acda2163054f7f1fae5f574924d13695c102c1217eb8ce96e82b41a8
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.067801273Z" level=info msg="Created container 9e3ca1badb80c9e442f37c2f39484464e099b4e0f12b8e56f13f22db0894cb87: kube-system/kube-proxy-46zxv/kube-proxy" id=98978f94-9425-4e40-a52a-6b02b7d5392d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.068368746Z" level=info msg="Starting container: 9e3ca1badb80c9e442f37c2f39484464e099b4e0f12b8e56f13f22db0894cb87" id=0d220394-46cb-45ae-b18e-201ea757d127 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:39 newest-cni-718157 crio[773]: time="2025-12-06T09:09:39.071053457Z" level=info msg="Started container" PID=1576 containerID=9e3ca1badb80c9e442f37c2f39484464e099b4e0f12b8e56f13f22db0894cb87 description=kube-system/kube-proxy-46zxv/kube-proxy id=0d220394-46cb-45ae-b18e-201ea757d127 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ea223a3a841283180855b44e01388e72e8a2581085b3af9fcf4b6b08d4cb9dc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9e3ca1badb80c       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   8ea223a3a8412       kube-proxy-46zxv                            kube-system
	c8bacf9173758       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   2796aba7acda2       kindnet-6q6w2                               kube-system
	5a46a00404bc3       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   94df1ab989db4       kube-scheduler-newest-cni-718157            kube-system
	8594ebe0ce4d4       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   2368efec8953f       kube-apiserver-newest-cni-718157            kube-system
	7e80f2bfe9150       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   d49b21ee66c62       etcd-newest-cni-718157                      kube-system
	4be31d9d78a2c       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   f2e9038354af8       kube-controller-manager-newest-cni-718157   kube-system
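
The listing above is the node's CRI view of its containers; roughly the same table can be reproduced inside the node (for example over minikube ssh) with the standard crictl listing, sketched here:

    # List all CRI containers, including exited ones, as the report section does.
    sudo crictl ps -a
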
	
	
	==> describe nodes <==
	Name:               newest-cni-718157
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-718157
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=newest-cni-718157
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_09_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:09:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-718157
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:09:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:09:33 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:09:33 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:09:33 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 06 Dec 2025 09:09:33 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-718157
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                ec2caef7-c7e1-47e9-abcb-e0e0655dbe92
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-718157                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-6q6w2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-718157             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-718157    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-46zxv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-718157             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-718157 event: Registered Node newest-cni-718157 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [7e80f2bfe9150c605116b0543c9f3c16f295540e10217fd0ba23a974cccfb1ec] <==
	{"level":"warn","ts":"2025-12-06T09:09:29.919645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:29.927440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:29.936015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:29.947187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:29.956969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:29.969107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:29.975474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:29.987331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:29.996304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.004278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.013155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.021527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.037238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.044645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.052708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.061605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.070078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.077783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.087764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.099427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.107488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.126546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.139775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.146731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:30.195969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37818","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:40 up 52 min,  0 user,  load average: 5.14, 2.88, 1.93
	Linux newest-cni-718157 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c8bacf9173758120f64be7878b9b230ce21220bda71873ea9869aa6c4a0c7c0c] <==
	I1206 09:09:39.268268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:09:39.268742       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1206 09:09:39.268901       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:09:39.268916       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:09:39.268941       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:09:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:09:39.565187       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:09:39.565222       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:09:39.565236       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:09:39.565352       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:09:40.065942       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:09:40.065979       1 metrics.go:72] Registering metrics
	I1206 09:09:40.066105       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [8594ebe0ce4d4eadbe99e8f2ff2d81d8380cea49b80ee631ad9d2c57cdc35676] <==
	I1206 09:09:30.736213       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:09:30.736219       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:09:30.736226       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:09:30.737227       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:09:30.739024       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:30.739043       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1206 09:09:30.745468       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:30.928781       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:09:31.639775       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1206 09:09:31.643182       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:09:31.643198       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:09:32.188504       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:09:32.233562       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:09:32.343715       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:09:32.350560       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1206 09:09:32.351877       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:09:32.355562       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:09:32.681497       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:09:33.410685       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:09:33.420048       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:09:33.426808       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:09:38.333885       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:09:38.540636       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:38.549309       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:38.687820       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4be31d9d78a2c802043c980cc2233c0acf9b01bf0971dd0d2bc92f2a2753ebf2] <==
	I1206 09:09:37.490739       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.490771       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491038       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491177       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491186       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491438       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491510       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491621       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491644       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491649       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491657       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491781       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491850       1 range_allocator.go:177] "Sending events to api server"
	I1206 09:09:37.491866       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.491903       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1206 09:09:37.491928       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:09:37.491939       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.492152       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:09:37.491929       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.496176       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.501885       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-718157" podCIDRs=["10.42.0.0/24"]
	I1206 09:09:37.592100       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:37.592120       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:09:37.592126       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:09:37.592268       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [9e3ca1badb80c9e442f37c2f39484464e099b4e0f12b8e56f13f22db0894cb87] <==
	I1206 09:09:39.105248       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:09:39.183732       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:09:39.283935       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:39.283975       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1206 09:09:39.284138       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:09:39.317556       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:09:39.317633       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:09:39.339283       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:09:39.342433       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:09:39.342834       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:09:39.347717       1 config.go:200] "Starting service config controller"
	I1206 09:09:39.347755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:09:39.348851       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:09:39.348881       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:09:39.350140       1 config.go:309] "Starting node config controller"
	I1206 09:09:39.350163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:09:39.352276       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:09:39.352499       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:09:39.448323       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:09:39.449497       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:09:39.452653       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:09:39.452668       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [5a46a00404bc3d0f2379492770ab7fd083a54199979da3487e266f707c555ef8] <==
	E1206 09:09:31.535827       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1206 09:09:31.536625       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1206 09:09:31.547948       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:09:31.548888       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1206 09:09:31.567387       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:09:31.568444       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1206 09:09:31.587168       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:09:31.588491       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1206 09:09:31.749747       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:09:31.750747       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1206 09:09:31.753730       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:09:31.754702       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1206 09:09:31.785600       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1206 09:09:31.786870       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1206 09:09:31.852261       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:09:31.853365       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1206 09:09:31.883168       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1206 09:09:31.884263       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1206 09:09:31.884395       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1206 09:09:31.885309       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1206 09:09:31.892513       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1206 09:09:31.893608       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1206 09:09:31.963146       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1206 09:09:31.964263       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	I1206 09:09:34.290346       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:09:34 newest-cni-718157 kubelet[1294]: E1206 09:09:34.291851    1294 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-718157\" already exists" pod="kube-system/kube-apiserver-newest-cni-718157"
	Dec 06 09:09:34 newest-cni-718157 kubelet[1294]: E1206 09:09:34.291940    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-718157" containerName="kube-apiserver"
	Dec 06 09:09:34 newest-cni-718157 kubelet[1294]: E1206 09:09:34.292222    1294 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-718157\" already exists" pod="kube-system/etcd-newest-cni-718157"
	Dec 06 09:09:34 newest-cni-718157 kubelet[1294]: E1206 09:09:34.292278    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-718157" containerName="etcd"
	Dec 06 09:09:34 newest-cni-718157 kubelet[1294]: I1206 09:09:34.300090    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-718157" podStartSLOduration=1.300070168 podStartE2EDuration="1.300070168s" podCreationTimestamp="2025-12-06 09:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:34.3000706 +0000 UTC m=+1.144559902" watchObservedRunningTime="2025-12-06 09:09:34.300070168 +0000 UTC m=+1.144559469"
	Dec 06 09:09:34 newest-cni-718157 kubelet[1294]: I1206 09:09:34.300199    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-718157" podStartSLOduration=2.300194088 podStartE2EDuration="2.300194088s" podCreationTimestamp="2025-12-06 09:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:34.289210813 +0000 UTC m=+1.133700114" watchObservedRunningTime="2025-12-06 09:09:34.300194088 +0000 UTC m=+1.144683390"
	Dec 06 09:09:35 newest-cni-718157 kubelet[1294]: E1206 09:09:35.285470    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-718157" containerName="kube-apiserver"
	Dec 06 09:09:35 newest-cni-718157 kubelet[1294]: E1206 09:09:35.286083    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-718157" containerName="kube-scheduler"
	Dec 06 09:09:35 newest-cni-718157 kubelet[1294]: E1206 09:09:35.286411    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-718157" containerName="etcd"
	Dec 06 09:09:36 newest-cni-718157 kubelet[1294]: E1206 09:09:36.287613    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-718157" containerName="etcd"
	Dec 06 09:09:36 newest-cni-718157 kubelet[1294]: E1206 09:09:36.287879    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-718157" containerName="kube-scheduler"
	Dec 06 09:09:36 newest-cni-718157 kubelet[1294]: E1206 09:09:36.288101    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-718157" containerName="kube-apiserver"
	Dec 06 09:09:37 newest-cni-718157 kubelet[1294]: E1206 09:09:37.290379    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-718157" containerName="kube-scheduler"
	Dec 06 09:09:37 newest-cni-718157 kubelet[1294]: I1206 09:09:37.589572    1294 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 06 09:09:37 newest-cni-718157 kubelet[1294]: I1206 09:09:37.590300    1294 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 06 09:09:38 newest-cni-718157 kubelet[1294]: I1206 09:09:38.781653    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/740bbe6b-e50c-4cf4-b593-5f871820515c-lib-modules\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:38 newest-cni-718157 kubelet[1294]: I1206 09:09:38.782098    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthpx\" (UniqueName: \"kubernetes.io/projected/13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690-kube-api-access-sthpx\") pod \"kube-proxy-46zxv\" (UID: \"13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690\") " pod="kube-system/kube-proxy-46zxv"
	Dec 06 09:09:38 newest-cni-718157 kubelet[1294]: I1206 09:09:38.782223    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk52g\" (UniqueName: \"kubernetes.io/projected/740bbe6b-e50c-4cf4-b593-5f871820515c-kube-api-access-wk52g\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:38 newest-cni-718157 kubelet[1294]: I1206 09:09:38.782291    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690-xtables-lock\") pod \"kube-proxy-46zxv\" (UID: \"13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690\") " pod="kube-system/kube-proxy-46zxv"
	Dec 06 09:09:38 newest-cni-718157 kubelet[1294]: I1206 09:09:38.782337    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690-kube-proxy\") pod \"kube-proxy-46zxv\" (UID: \"13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690\") " pod="kube-system/kube-proxy-46zxv"
	Dec 06 09:09:38 newest-cni-718157 kubelet[1294]: I1206 09:09:38.782433    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690-lib-modules\") pod \"kube-proxy-46zxv\" (UID: \"13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690\") " pod="kube-system/kube-proxy-46zxv"
	Dec 06 09:09:38 newest-cni-718157 kubelet[1294]: I1206 09:09:38.782479    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/740bbe6b-e50c-4cf4-b593-5f871820515c-cni-cfg\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:38 newest-cni-718157 kubelet[1294]: I1206 09:09:38.782506    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/740bbe6b-e50c-4cf4-b593-5f871820515c-xtables-lock\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:39 newest-cni-718157 kubelet[1294]: I1206 09:09:39.341151    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-6q6w2" podStartSLOduration=1.341129767 podStartE2EDuration="1.341129767s" podCreationTimestamp="2025-12-06 09:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:39.340805978 +0000 UTC m=+6.185295304" watchObservedRunningTime="2025-12-06 09:09:39.341129767 +0000 UTC m=+6.185619063"
	Dec 06 09:09:39 newest-cni-718157 kubelet[1294]: I1206 09:09:39.341484    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-46zxv" podStartSLOduration=1.3414732360000001 podStartE2EDuration="1.341473236s" podCreationTimestamp="2025-12-06 09:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:39.318396031 +0000 UTC m=+6.162885331" watchObservedRunningTime="2025-12-06 09:09:39.341473236 +0000 UTC m=+6.185962537"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-718157 -n newest-cni-718157
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-718157 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-4xnvs storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-718157 describe pod coredns-7d764666f9-4xnvs storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-718157 describe pod coredns-7d764666f9-4xnvs storage-provisioner: exit status 1 (60.795617ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4xnvs" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-718157 describe pod coredns-7d764666f9-4xnvs storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-931091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-931091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.403662ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-931091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-931091 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-931091 describe deploy/metrics-server -n kube-system: exit status 1 (59.00691ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-931091 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-931091
helpers_test.go:243: (dbg) docker inspect embed-certs-931091:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63",
	        "Created": "2025-12-06T09:09:01.161536877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279533,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:09:01.197017246Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/hostname",
	        "HostsPath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/hosts",
	        "LogPath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63-json.log",
	        "Name": "/embed-certs-931091",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-931091:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-931091",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63",
	                "LowerDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213/merged",
	                "UpperDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213/diff",
	                "WorkDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-931091",
	                "Source": "/var/lib/docker/volumes/embed-certs-931091/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-931091",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-931091",
	                "name.minikube.sigs.k8s.io": "embed-certs-931091",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b7a7a2caff373bcb7068e27a159f5f80c8a3393bc14756673794daebe7bf0abc",
	            "SandboxKey": "/var/run/docker/netns/b7a7a2caff37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-931091": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70ecd367dba42d1818bd7c40275791d03131ddf8b1c44024d97d10092da13f1c",
	                    "EndpointID": "a8e2021f049c4c5a212180c5e58dfabc3aafdc0f917bdfaeedf41e7603791f84",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "9e:72:d1:98:40:3a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-931091",
	                        "6aa3c5072933"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931091 -n embed-certs-931091
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-931091 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-931091 logs -n 25: (1.021077202s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-769733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ delete  │ -p stopped-upgrade-454433                                                                                                                                                                                                                            │ stopped-upgrade-454433       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ image   │ no-preload-769733 image list --format=json                                                                                                                                                                                                           │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p no-preload-769733 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-702638                                                                                                                                                                                                                         │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-217626                                                                                                                                                                                                                      │ disable-driver-mounts-217626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ image   │ old-k8s-version-322324 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p old-k8s-version-322324 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                                 │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                                 │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p old-k8s-version-322324                                                                                                                                                                                                                            │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p old-k8s-version-322324                                                                                                                                                                                                                            │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p auto-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-718157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ stop    │ -p newest-cni-718157 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-718157 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-931091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:09:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:09:44.481140  296022 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:44.481466  296022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:44.481478  296022 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:44.481485  296022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:44.481758  296022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:44.482365  296022 out.go:368] Setting JSON to false
	I1206 09:09:44.483661  296022 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3135,"bootTime":1765009049,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:09:44.483724  296022 start.go:143] virtualization: kvm guest
	I1206 09:09:44.485707  296022 out.go:179] * [newest-cni-718157] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:09:44.487187  296022 notify.go:221] Checking for updates...
	I1206 09:09:44.487203  296022 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:09:44.488662  296022 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:09:44.489948  296022 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:44.491184  296022 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:09:44.495524  296022 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:09:44.496714  296022 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:09:44.498401  296022 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:44.499006  296022 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:09:44.522805  296022 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:09:44.522907  296022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:44.578942  296022 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:44.569070956 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:44.579099  296022 docker.go:319] overlay module found
	I1206 09:09:44.580885  296022 out.go:179] * Using the docker driver based on existing profile
	I1206 09:09:44.582071  296022 start.go:309] selected driver: docker
	I1206 09:09:44.582087  296022 start.go:927] validating driver "docker" against &{Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:44.582189  296022 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:09:44.582819  296022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:44.639949  296022 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:44.63090625 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:44.640283  296022 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:09:44.640317  296022 cni.go:84] Creating CNI manager for ""
	I1206 09:09:44.640364  296022 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:44.640416  296022 start.go:353] cluster config:
	{Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:44.642383  296022 out.go:179] * Starting "newest-cni-718157" primary control-plane node in "newest-cni-718157" cluster
	I1206 09:09:44.643675  296022 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:09:44.644874  296022 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:09:44.665443  289573 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:09:44.665509  289573 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:09:44.665628  289573 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:09:44.665737  289573 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:09:44.665795  289573 kubeadm.go:319] OS: Linux
	I1206 09:09:44.665865  289573 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:09:44.665947  289573 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:09:44.666038  289573 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:09:44.666121  289573 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:09:44.666203  289573 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:09:44.666279  289573 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:09:44.666375  289573 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:09:44.666422  289573 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:09:44.666516  289573 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:09:44.666665  289573 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:09:44.666808  289573 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:09:44.666905  289573 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:09:44.668615  289573 out.go:252]   - Generating certificates and keys ...
	I1206 09:09:44.668699  289573 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:09:44.668782  289573 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:09:44.668875  289573 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:09:44.668944  289573 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:09:44.669035  289573 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:09:44.669078  289573 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:09:44.669125  289573 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:09:44.669225  289573 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:09:44.669293  289573 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:09:44.669461  289573 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:09:44.669553  289573 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:09:44.669624  289573 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:09:44.669662  289573 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:09:44.669708  289573 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:09:44.669749  289573 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:09:44.669799  289573 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:09:44.669857  289573 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:09:44.669915  289573 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:09:44.669977  289573 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:09:44.670071  289573 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:09:44.670126  289573 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:09:44.671573  289573 out.go:252]   - Booting up control plane ...
	I1206 09:09:44.671656  289573 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:09:44.671732  289573 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:09:44.671819  289573 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:09:44.671928  289573 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:09:44.672037  289573 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:09:44.672121  289573 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:09:44.672242  289573 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:09:44.672304  289573 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:09:44.672443  289573 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:09:44.672574  289573 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:09:44.672667  289573 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.699798ms
	I1206 09:09:44.672783  289573 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:09:44.672905  289573 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:09:44.673043  289573 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:09:44.673142  289573 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:09:44.673228  289573 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.714776717s
	I1206 09:09:44.673318  289573 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.957962814s
	I1206 09:09:44.673404  289573 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501917325s
	I1206 09:09:44.673513  289573 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:09:44.673614  289573 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:09:44.673666  289573 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:09:44.673822  289573 kubeadm.go:319] [mark-control-plane] Marking the node auto-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:09:44.673867  289573 kubeadm.go:319] [bootstrap-token] Using token: sx7844.6ut2unu1ekbq276s
	I1206 09:09:44.675185  289573 out.go:252]   - Configuring RBAC rules ...
	I1206 09:09:44.675273  289573 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:09:44.675339  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:09:44.675487  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:09:44.675648  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:09:44.675801  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:09:44.675911  289573 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:09:44.676080  289573 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:09:44.676143  289573 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:09:44.676210  289573 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:09:44.676228  289573 kubeadm.go:319] 
	I1206 09:09:44.676334  289573 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:09:44.676344  289573 kubeadm.go:319] 
	I1206 09:09:44.676456  289573 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:09:44.676465  289573 kubeadm.go:319] 
	I1206 09:09:44.676504  289573 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:09:44.676580  289573 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:09:44.676642  289573 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:09:44.676666  289573 kubeadm.go:319] 
	I1206 09:09:44.676741  289573 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:09:44.676752  289573 kubeadm.go:319] 
	I1206 09:09:44.676806  289573 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:09:44.676815  289573 kubeadm.go:319] 
	I1206 09:09:44.676867  289573 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:09:44.676950  289573 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:09:44.677071  289573 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:09:44.677086  289573 kubeadm.go:319] 
	I1206 09:09:44.677190  289573 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:09:44.677289  289573 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:09:44.677299  289573 kubeadm.go:319] 
	I1206 09:09:44.677408  289573 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sx7844.6ut2unu1ekbq276s \
	I1206 09:09:44.677524  289573 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:09:44.677546  289573 kubeadm.go:319] 	--control-plane 
	I1206 09:09:44.677559  289573 kubeadm.go:319] 
	I1206 09:09:44.677679  289573 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:09:44.677701  289573 kubeadm.go:319] 
	I1206 09:09:44.677804  289573 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sx7844.6ut2unu1ekbq276s \
	I1206 09:09:44.677974  289573 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:09:44.678025  289573 cni.go:84] Creating CNI manager for ""
	I1206 09:09:44.678041  289573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:44.680203  289573 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:09:44.645971  296022 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:09:44.646014  296022 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:09:44.646027  296022 cache.go:65] Caching tarball of preloaded images
	I1206 09:09:44.646085  296022 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:09:44.646135  296022 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:09:44.646151  296022 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:09:44.646240  296022 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/config.json ...
	I1206 09:09:44.668198  296022 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:09:44.668220  296022 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:09:44.668234  296022 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:09:44.668262  296022 start.go:360] acquireMachinesLock for newest-cni-718157: {Name:mkd215ec128fd4b5f2323afe6abf6121f194a6a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:09:44.668314  296022 start.go:364] duration metric: took 35.42µs to acquireMachinesLock for "newest-cni-718157"
	I1206 09:09:44.668331  296022 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:09:44.668339  296022 fix.go:54] fixHost starting: 
	I1206 09:09:44.668544  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:44.690123  296022 fix.go:112] recreateIfNeeded on newest-cni-718157: state=Stopped err=<nil>
	W1206 09:09:44.690153  296022 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:09:44.681497  289573 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:09:44.687346  289573 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:09:44.687367  289573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:09:44.702056  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:09:44.964321  289573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:09:44.964415  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:44.964438  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-646473 minikube.k8s.io/updated_at=2025_12_06T09_09_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=auto-646473 minikube.k8s.io/primary=true
	I1206 09:09:44.976722  289573 ops.go:34] apiserver oom_adj: -16
	I1206 09:09:45.056190  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:45.556870  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Dec 06 09:09:36 embed-certs-931091 crio[775]: time="2025-12-06T09:09:36.882023514Z" level=info msg="Starting container: 839f0e583b799a580defb142dbc4a21a790cb36e616fefea6af217e166539ad9" id=feb00dd0-9af9-4ff0-9d09-dc8a82dedad8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:36 embed-certs-931091 crio[775]: time="2025-12-06T09:09:36.884932493Z" level=info msg="Started container" PID=1865 containerID=839f0e583b799a580defb142dbc4a21a790cb36e616fefea6af217e166539ad9 description=kube-system/coredns-66bc5c9577-x87kt/coredns id=feb00dd0-9af9-4ff0-9d09-dc8a82dedad8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=de4f3e3079631618d4856fda8d750ccb6e4e29a3e55cf5780a40f09a955337df
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.235979356Z" level=info msg="Running pod sandbox: default/busybox/POD" id=790fe9a9-dd82-47fb-8cd1-4655031e3515 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.236534134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.243183791Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:66b6af47895c30a415914026e7c2339174333e8adbda9637125a2653c6f0e42b UID:0721acc9-3cc1-45bb-b49b-5ab87b43bb99 NetNS:/var/run/netns/42cdbfb0-2a1c-41c7-bf68-b6d3cb69392b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a9c0}] Aliases:map[]}"
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.243211587Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.254210208Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:66b6af47895c30a415914026e7c2339174333e8adbda9637125a2653c6f0e42b UID:0721acc9-3cc1-45bb-b49b-5ab87b43bb99 NetNS:/var/run/netns/42cdbfb0-2a1c-41c7-bf68-b6d3cb69392b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a9c0}] Aliases:map[]}"
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.254384291Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.255274624Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.256373728Z" level=info msg="Ran pod sandbox 66b6af47895c30a415914026e7c2339174333e8adbda9637125a2653c6f0e42b with infra container: default/busybox/POD" id=790fe9a9-dd82-47fb-8cd1-4655031e3515 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.257692642Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cd063771-87e5-4718-b138-cac1aa9e6f3c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.257842559Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cd063771-87e5-4718-b138-cac1aa9e6f3c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.257892522Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cd063771-87e5-4718-b138-cac1aa9e6f3c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.258715579Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f43aae81-6411-41dc-8581-ef7023af072c name=/runtime.v1.ImageService/PullImage
	Dec 06 09:09:40 embed-certs-931091 crio[775]: time="2025-12-06T09:09:40.263242651Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.695533326Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f43aae81-6411-41dc-8581-ef7023af072c name=/runtime.v1.ImageService/PullImage
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.69637009Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dfd15ac9-5f21-4a11-bcd2-38a70071ba3f name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.697803352Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7be26d91-eebc-4882-9356-b4bf8df7bc87 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.70128258Z" level=info msg="Creating container: default/busybox/busybox" id=60d8ee5d-e9c2-4529-b49a-7d8e9bd3a9db name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.701431851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.706160508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.706732016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.738806974Z" level=info msg="Created container 09555eeea78f88edca8acdc4af3945ad230350f146b0146c3fc7a100ed3cbb69: default/busybox/busybox" id=60d8ee5d-e9c2-4529-b49a-7d8e9bd3a9db name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.739519648Z" level=info msg="Starting container: 09555eeea78f88edca8acdc4af3945ad230350f146b0146c3fc7a100ed3cbb69" id=0ab6c672-c9a8-4bda-8158-e7235db45f0b name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:41 embed-certs-931091 crio[775]: time="2025-12-06T09:09:41.741757009Z" level=info msg="Started container" PID=1942 containerID=09555eeea78f88edca8acdc4af3945ad230350f146b0146c3fc7a100ed3cbb69 description=default/busybox/busybox id=0ab6c672-c9a8-4bda-8158-e7235db45f0b name=/runtime.v1.RuntimeService/StartContainer sandboxID=66b6af47895c30a415914026e7c2339174333e8adbda9637125a2653c6f0e42b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	09555eeea78f8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   66b6af47895c3       busybox                                      default
	839f0e583b799       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   de4f3e3079631       coredns-66bc5c9577-x87kt                     kube-system
	82a8c0628c0b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   1e0651e942dd2       storage-provisioner                          kube-system
	a9ab9337b25e7       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   ca6bdedfa113b       kube-proxy-9hp5d                             kube-system
	03d93b6b400dd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   8c81e0636279a       kindnet-kzpz2                                kube-system
	318322c324e23       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      35 seconds ago      Running             kube-apiserver            0                   8c8e214c7a2ce       kube-apiserver-embed-certs-931091            kube-system
	3b773ee15283b       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      35 seconds ago      Running             kube-scheduler            0                   fc0e4fca50164       kube-scheduler-embed-certs-931091            kube-system
	dc781e6b10199       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   834bb3e7f054c       etcd-embed-certs-931091                      kube-system
	6d013b56b17b8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      35 seconds ago      Running             kube-controller-manager   0                   1055fa325216a       kube-controller-manager-embed-certs-931091   kube-system
	
	
	==> coredns [839f0e583b799a580defb142dbc4a21a790cb36e616fefea6af217e166539ad9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39206 - 63251 "HINFO IN 5505978342393723015.7041464987818433884. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046070201s
	
	
	==> describe nodes <==
	Name:               embed-certs-931091
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-931091
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=embed-certs-931091
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:09:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-931091
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:09:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:09:36 +0000   Sat, 06 Dec 2025 09:09:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:09:36 +0000   Sat, 06 Dec 2025 09:09:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:09:36 +0000   Sat, 06 Dec 2025 09:09:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:09:36 +0000   Sat, 06 Dec 2025 09:09:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-931091
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                ca3719f5-d0e6-4020-bdb6-8b9c5b73b4fa
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-x87kt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-931091                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-kzpz2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-931091             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-931091    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-9hp5d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-931091             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-931091 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-931091 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-931091 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node embed-certs-931091 event: Registered Node embed-certs-931091 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-931091 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [dc781e6b10199ff2a2c1a3b8de8e80b86bb2774e3f23dccab89aac6548b86ea6] <==
	{"level":"info","ts":"2025-12-06T09:09:17.894314Z","caller":"traceutil/trace.go:172","msg":"trace[1712218800] linearizableReadLoop","detail":"{readStateIndex:198; appliedIndex:197; }","duration":"178.566486ms","start":"2025-12-06T09:09:17.715733Z","end":"2025-12-06T09:09:17.894300Z","steps":["trace[1712218800] 'read index received'  (duration: 26.301µs)","trace[1712218800] 'applied index is now lower than readState.Index'  (duration: 178.539115ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:09:17.894429Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.683573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:09:17.894459Z","caller":"traceutil/trace.go:172","msg":"trace[1084661697] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:194; }","duration":"178.729075ms","start":"2025-12-06T09:09:17.715723Z","end":"2025-12-06T09:09:17.894452Z","steps":["trace[1084661697] 'agreement among raft nodes before linearized reading'  (duration: 178.64336ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:09:17.894387Z","caller":"traceutil/trace.go:172","msg":"trace[1518660357] transaction","detail":"{read_only:false; response_revision:194; number_of_response:1; }","duration":"309.16094ms","start":"2025-12-06T09:09:17.585202Z","end":"2025-12-06T09:09:17.894363Z","steps":["trace[1518660357] 'process raft request'  (duration: 130.327371ms)","trace[1518660357] 'compare'  (duration: 178.433159ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:09:17.894756Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:09:17.585186Z","time spent":"309.505377ms","remote":"127.0.0.1:58986","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":694,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" mod_revision:0 > success:<request_put:<key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" value_size:624 >> failure:<>"}
	{"level":"info","ts":"2025-12-06T09:09:18.041854Z","caller":"traceutil/trace.go:172","msg":"trace[1005495899] transaction","detail":"{read_only:false; response_revision:195; number_of_response:1; }","duration":"140.910563ms","start":"2025-12-06T09:09:17.900923Z","end":"2025-12-06T09:09:18.041833Z","steps":["trace[1005495899] 'process raft request'  (duration: 122.646008ms)","trace[1005495899] 'compare'  (duration: 18.141391ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:09:18.338338Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"195.504285ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:controller:cloud-provider\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:09:18.338395Z","caller":"traceutil/trace.go:172","msg":"trace[624672583] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:controller:cloud-provider; range_end:; response_count:0; response_revision:198; }","duration":"195.579636ms","start":"2025-12-06T09:09:18.142804Z","end":"2025-12-06T09:09:18.338384Z","steps":["trace[624672583] 'range keys from in-memory index tree'  (duration: 195.416266ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:09:18.338450Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.819996ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790495226063348 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-931091\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-931091\" value_size:3306 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:09:18.338530Z","caller":"traceutil/trace.go:172","msg":"trace[1130722082] transaction","detail":"{read_only:false; response_revision:199; number_of_response:1; }","duration":"148.091863ms","start":"2025-12-06T09:09:18.190426Z","end":"2025-12-06T09:09:18.338518Z","steps":["trace[1130722082] 'process raft request'  (duration: 21.157946ms)","trace[1130722082] 'compare'  (duration: 126.697117ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:09:18.546730Z","caller":"traceutil/trace.go:172","msg":"trace[1829843651] transaction","detail":"{read_only:false; response_revision:200; number_of_response:1; }","duration":"204.327949ms","start":"2025-12-06T09:09:18.342376Z","end":"2025-12-06T09:09:18.546704Z","steps":["trace[1829843651] 'process raft request'  (duration: 144.964633ms)","trace[1829843651] 'compare'  (duration: 59.232574ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:09:19.051289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.568403ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790495226063361 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:40899af2ec23ea00>","response":"size:40"}
	{"level":"warn","ts":"2025-12-06T09:09:19.051427Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:09:18.727833Z","time spent":"323.587808ms","remote":"127.0.0.1:58412","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-12-06T09:09:23.035800Z","caller":"traceutil/trace.go:172","msg":"trace[1304464170] linearizableReadLoop","detail":"{readStateIndex:296; appliedIndex:296; }","duration":"107.453992ms","start":"2025-12-06T09:09:22.928322Z","end":"2025-12-06T09:09:23.035776Z","steps":["trace[1304464170] 'read index received'  (duration: 107.445681ms)","trace[1304464170] 'applied index is now lower than readState.Index'  (duration: 7.329µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:09:23.035935Z","caller":"traceutil/trace.go:172","msg":"trace[1709015975] transaction","detail":"{read_only:false; response_revision:286; number_of_response:1; }","duration":"133.672586ms","start":"2025-12-06T09:09:22.902247Z","end":"2025-12-06T09:09:23.035919Z","steps":["trace[1709015975] 'process raft request'  (duration: 133.563088ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:09:23.035935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.597589ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:09:23.036014Z","caller":"traceutil/trace.go:172","msg":"trace[1385565921] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:285; }","duration":"107.669392ms","start":"2025-12-06T09:09:22.928309Z","end":"2025-12-06T09:09:23.035978Z","steps":["trace[1385565921] 'agreement among raft nodes before linearized reading'  (duration: 107.549459ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:09:24.009659Z","caller":"traceutil/trace.go:172","msg":"trace[784362107] linearizableReadLoop","detail":"{readStateIndex:302; appliedIndex:302; }","duration":"123.777262ms","start":"2025-12-06T09:09:23.885855Z","end":"2025-12-06T09:09:24.009632Z","steps":["trace[784362107] 'read index received'  (duration: 123.766968ms)","trace[784362107] 'applied index is now lower than readState.Index'  (duration: 9.257µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:09:24.009798Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.925496ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/token-cleaner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:09:24.009846Z","caller":"traceutil/trace.go:172","msg":"trace[2001789339] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/token-cleaner; range_end:; response_count:0; response_revision:291; }","duration":"123.990042ms","start":"2025-12-06T09:09:23.885843Z","end":"2025-12-06T09:09:24.009833Z","steps":["trace[2001789339] 'agreement among raft nodes before linearized reading'  (duration: 123.873792ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:09:24.010040Z","caller":"traceutil/trace.go:172","msg":"trace[647186247] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"144.207788ms","start":"2025-12-06T09:09:23.865721Z","end":"2025-12-06T09:09:24.009928Z","steps":["trace[647186247] 'process raft request'  (duration: 143.93414ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:09:24.300372Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.162908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:09:24.300454Z","caller":"traceutil/trace.go:172","msg":"trace[695688161] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:292; }","duration":"181.251659ms","start":"2025-12-06T09:09:24.119184Z","end":"2025-12-06T09:09:24.300435Z","steps":["trace[695688161] 'agreement among raft nodes before linearized reading'  (duration: 30.10132ms)","trace[695688161] 'range keys from in-memory index tree'  (duration: 151.026916ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:09:24.300473Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.058085ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790495226063587 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/token-cleaner\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/token-cleaner\" value_size:118 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:09:24.300547Z","caller":"traceutil/trace.go:172","msg":"trace[1223786730] transaction","detail":"{read_only:false; response_revision:293; number_of_response:1; }","duration":"284.590443ms","start":"2025-12-06T09:09:24.015944Z","end":"2025-12-06T09:09:24.300534Z","steps":["trace[1223786730] 'process raft request'  (duration: 133.419845ms)","trace[1223786730] 'compare'  (duration: 150.945653ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:09:48 up 52 min,  0 user,  load average: 4.81, 2.85, 1.92
	Linux embed-certs-931091 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03d93b6b400dd61b093cbbc6a5bc755514b4f541ad6d9e7ab9036d5966ec48cf] <==
	I1206 09:09:25.895117       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:09:25.895905       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1206 09:09:25.896105       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:09:25.896161       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:09:25.896229       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:09:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:09:26.256041       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:09:26.256159       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:09:26.256179       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:09:26.256381       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:09:26.656606       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:09:26.656639       1 metrics.go:72] Registering metrics
	I1206 09:09:26.656695       1 controller.go:711] "Syncing nftables rules"
	I1206 09:09:36.105734       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:09:36.105781       1 main.go:301] handling current node
	I1206 09:09:46.106621       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:09:46.106673       1 main.go:301] handling current node
	
	
	==> kube-apiserver [318322c324e23b17ad2a45351e97a1145321649be6222be883c877b5a5a274e5] <==
	I1206 09:09:15.715484       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:09:15.722609       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:15.723327       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1206 09:09:15.734788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:15.735238       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:09:15.833062       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:09:16.517967       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:09:16.522639       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:09:16.522663       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:09:17.143302       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:09:17.900514       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:09:18.726110       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:09:19.056937       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1206 09:09:19.058280       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:09:19.064717       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:09:19.541960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1206 09:09:19.631527       1 watch.go:272] "Unhandled Error" err="write tcp 192.168.103.2:8443->192.168.103.2:37742: write: connection reset by peer" logger="UnhandledError"
	I1206 09:09:19.867512       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:09:19.880415       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:09:19.891037       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:09:25.042874       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:09:25.243149       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1206 09:09:25.295691       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:25.299546       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1206 09:09:47.011521       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:34364: use of closed network connection
	
	
	==> kube-controller-manager [6d013b56b17b8e35f78d3ebe3b13b38f2b2ad6bce8be0e2c057e10d6b7582e8b] <==
	I1206 09:09:24.567110       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:09:24.575442       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:09:24.579645       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:09:24.582832       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:09:24.585058       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:09:24.587397       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1206 09:09:24.589516       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:09:24.589555       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:09:24.589572       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:09:24.589589       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:09:24.590497       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:09:24.591860       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:09:24.591881       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:09:24.595156       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:09:24.603732       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-931091" podCIDRs=["10.244.0.0/24"]
	I1206 09:09:24.605039       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:09:24.605135       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:09:24.610340       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:09:24.614505       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:09:24.615644       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 09:09:24.632038       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:09:24.633092       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:09:24.633110       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:09:24.633120       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:09:39.545185       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a9ab9337b25e765e58da78ef0385d524b8a5f06277ccc8c70a574a7f4e1b35c2] <==
	I1206 09:09:25.703036       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:09:25.760050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:09:25.860399       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:09:25.860449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1206 09:09:25.860556       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:09:25.889751       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:09:25.889821       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:09:25.898142       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:09:25.898910       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:09:25.899081       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:09:25.901698       1 config.go:200] "Starting service config controller"
	I1206 09:09:25.901718       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:09:25.901744       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:09:25.901751       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:09:25.901777       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:09:25.901782       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:09:25.902724       1 config.go:309] "Starting node config controller"
	I1206 09:09:25.902733       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:09:25.902739       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:09:26.002906       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:09:26.002976       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:09:26.003022       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3b773ee15283b7ad6f12b1bf59db670fe18660c60053ed682f0d6e2a02b18fee] <==
	E1206 09:09:15.590737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:09:15.590640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:09:15.591025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:09:15.590744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:09:15.590636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:09:15.590835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:09:15.590929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:09:15.590954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:09:15.590824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:09:15.590876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:09:16.487008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:09:16.500825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:09:16.603864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:09:16.640517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:09:16.648297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:09:16.661558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:09:16.702477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:09:16.706839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:09:16.728224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:09:16.729144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:09:16.817185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:09:16.878062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:09:16.901911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:09:16.928168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1206 09:09:18.884409       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:09:20 embed-certs-931091 kubelet[1334]: I1206 09:09:20.890589    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-931091" podStartSLOduration=1.890563445 podStartE2EDuration="1.890563445s" podCreationTimestamp="2025-12-06 09:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:20.879933776 +0000 UTC m=+1.205687298" watchObservedRunningTime="2025-12-06 09:09:20.890563445 +0000 UTC m=+1.216316968"
	Dec 06 09:09:20 embed-certs-931091 kubelet[1334]: I1206 09:09:20.906744    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-931091" podStartSLOduration=1.90672212 podStartE2EDuration="1.90672212s" podCreationTimestamp="2025-12-06 09:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:20.89150328 +0000 UTC m=+1.217256803" watchObservedRunningTime="2025-12-06 09:09:20.90672212 +0000 UTC m=+1.232475642"
	Dec 06 09:09:20 embed-certs-931091 kubelet[1334]: I1206 09:09:20.917154    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-931091" podStartSLOduration=4.91713051 podStartE2EDuration="4.91713051s" podCreationTimestamp="2025-12-06 09:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:20.906893324 +0000 UTC m=+1.232646829" watchObservedRunningTime="2025-12-06 09:09:20.91713051 +0000 UTC m=+1.242884030"
	Dec 06 09:09:20 embed-certs-931091 kubelet[1334]: I1206 09:09:20.917407    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-931091" podStartSLOduration=2.917394844 podStartE2EDuration="2.917394844s" podCreationTimestamp="2025-12-06 09:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:20.91730587 +0000 UTC m=+1.243059395" watchObservedRunningTime="2025-12-06 09:09:20.917394844 +0000 UTC m=+1.243148369"
	Dec 06 09:09:24 embed-certs-931091 kubelet[1334]: I1206 09:09:24.687081    1334 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:09:24 embed-certs-931091 kubelet[1334]: I1206 09:09:24.687817    1334 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.340187    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76177429-d0e3-430d-b316-9b5894760b2e-kube-proxy\") pod \"kube-proxy-9hp5d\" (UID: \"76177429-d0e3-430d-b316-9b5894760b2e\") " pod="kube-system/kube-proxy-9hp5d"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.340255    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59qdk\" (UniqueName: \"kubernetes.io/projected/76177429-d0e3-430d-b316-9b5894760b2e-kube-api-access-59qdk\") pod \"kube-proxy-9hp5d\" (UID: \"76177429-d0e3-430d-b316-9b5894760b2e\") " pod="kube-system/kube-proxy-9hp5d"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.340288    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6ce4c876-e571-40c7-a764-c47426d42617-cni-cfg\") pod \"kindnet-kzpz2\" (UID: \"6ce4c876-e571-40c7-a764-c47426d42617\") " pod="kube-system/kindnet-kzpz2"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.340317    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76177429-d0e3-430d-b316-9b5894760b2e-lib-modules\") pod \"kube-proxy-9hp5d\" (UID: \"76177429-d0e3-430d-b316-9b5894760b2e\") " pod="kube-system/kube-proxy-9hp5d"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.340345    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ce4c876-e571-40c7-a764-c47426d42617-xtables-lock\") pod \"kindnet-kzpz2\" (UID: \"6ce4c876-e571-40c7-a764-c47426d42617\") " pod="kube-system/kindnet-kzpz2"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.340365    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ce4c876-e571-40c7-a764-c47426d42617-lib-modules\") pod \"kindnet-kzpz2\" (UID: \"6ce4c876-e571-40c7-a764-c47426d42617\") " pod="kube-system/kindnet-kzpz2"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.340396    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x64v7\" (UniqueName: \"kubernetes.io/projected/6ce4c876-e571-40c7-a764-c47426d42617-kube-api-access-x64v7\") pod \"kindnet-kzpz2\" (UID: \"6ce4c876-e571-40c7-a764-c47426d42617\") " pod="kube-system/kindnet-kzpz2"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.340425    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76177429-d0e3-430d-b316-9b5894760b2e-xtables-lock\") pod \"kube-proxy-9hp5d\" (UID: \"76177429-d0e3-430d-b316-9b5894760b2e\") " pod="kube-system/kube-proxy-9hp5d"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.893249    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kzpz2" podStartSLOduration=0.893225109 podStartE2EDuration="893.225109ms" podCreationTimestamp="2025-12-06 09:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:25.893121765 +0000 UTC m=+6.218875288" watchObservedRunningTime="2025-12-06 09:09:25.893225109 +0000 UTC m=+6.218978629"
	Dec 06 09:09:25 embed-certs-931091 kubelet[1334]: I1206 09:09:25.893384    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9hp5d" podStartSLOduration=0.893370949 podStartE2EDuration="893.370949ms" podCreationTimestamp="2025-12-06 09:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:25.877275869 +0000 UTC m=+6.203029395" watchObservedRunningTime="2025-12-06 09:09:25.893370949 +0000 UTC m=+6.219124472"
	Dec 06 09:09:36 embed-certs-931091 kubelet[1334]: I1206 09:09:36.472877    1334 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 09:09:36 embed-certs-931091 kubelet[1334]: I1206 09:09:36.624179    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqn4x\" (UniqueName: \"kubernetes.io/projected/f06399c4-e82b-40d6-9eb5-8d37960bfdd4-kube-api-access-dqn4x\") pod \"storage-provisioner\" (UID: \"f06399c4-e82b-40d6-9eb5-8d37960bfdd4\") " pod="kube-system/storage-provisioner"
	Dec 06 09:09:36 embed-certs-931091 kubelet[1334]: I1206 09:09:36.624250    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsz9s\" (UniqueName: \"kubernetes.io/projected/652accc8-2082-4045-b568-7d4a68cd961c-kube-api-access-lsz9s\") pod \"coredns-66bc5c9577-x87kt\" (UID: \"652accc8-2082-4045-b568-7d4a68cd961c\") " pod="kube-system/coredns-66bc5c9577-x87kt"
	Dec 06 09:09:36 embed-certs-931091 kubelet[1334]: I1206 09:09:36.624287    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/652accc8-2082-4045-b568-7d4a68cd961c-config-volume\") pod \"coredns-66bc5c9577-x87kt\" (UID: \"652accc8-2082-4045-b568-7d4a68cd961c\") " pod="kube-system/coredns-66bc5c9577-x87kt"
	Dec 06 09:09:36 embed-certs-931091 kubelet[1334]: I1206 09:09:36.624315    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f06399c4-e82b-40d6-9eb5-8d37960bfdd4-tmp\") pod \"storage-provisioner\" (UID: \"f06399c4-e82b-40d6-9eb5-8d37960bfdd4\") " pod="kube-system/storage-provisioner"
	Dec 06 09:09:36 embed-certs-931091 kubelet[1334]: I1206 09:09:36.903191    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.903159935 podStartE2EDuration="10.903159935s" podCreationTimestamp="2025-12-06 09:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:36.903044944 +0000 UTC m=+17.228798449" watchObservedRunningTime="2025-12-06 09:09:36.903159935 +0000 UTC m=+17.228913462"
	Dec 06 09:09:37 embed-certs-931091 kubelet[1334]: I1206 09:09:37.906105    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x87kt" podStartSLOduration=12.906081883 podStartE2EDuration="12.906081883s" podCreationTimestamp="2025-12-06 09:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:37.905888883 +0000 UTC m=+18.231642406" watchObservedRunningTime="2025-12-06 09:09:37.906081883 +0000 UTC m=+18.231835408"
	Dec 06 09:09:40 embed-certs-931091 kubelet[1334]: I1206 09:09:40.043430    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kr6v\" (UniqueName: \"kubernetes.io/projected/0721acc9-3cc1-45bb-b49b-5ab87b43bb99-kube-api-access-6kr6v\") pod \"busybox\" (UID: \"0721acc9-3cc1-45bb-b49b-5ab87b43bb99\") " pod="default/busybox"
	Dec 06 09:09:41 embed-certs-931091 kubelet[1334]: I1206 09:09:41.913339    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.474355586 podStartE2EDuration="2.913317434s" podCreationTimestamp="2025-12-06 09:09:39 +0000 UTC" firstStartedPulling="2025-12-06 09:09:40.258260591 +0000 UTC m=+20.584014111" lastFinishedPulling="2025-12-06 09:09:41.697222444 +0000 UTC m=+22.022975959" observedRunningTime="2025-12-06 09:09:41.913038508 +0000 UTC m=+22.238792048" watchObservedRunningTime="2025-12-06 09:09:41.913317434 +0000 UTC m=+22.239070956"
	
	
	==> storage-provisioner [82a8c0628c0b89ef82e61d34e4465bb2e8fce499ec7f6f290fcf3bf189cb4dcb] <==
	I1206 09:09:36.878241       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:09:36.893207       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:09:36.893378       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:09:36.901587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:36.908499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:09:36.908728       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:09:36.909033       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-931091_b1f77b0f-9781-47b2-82ef-259a7a590ea5!
	I1206 09:09:36.909051       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45ccfad5-6c96-43b7-8f37-e4ba5bb38e67", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-931091_b1f77b0f-9781-47b2-82ef-259a7a590ea5 became leader
	W1206 09:09:36.911667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:36.919300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:09:37.009585       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-931091_b1f77b0f-9781-47b2-82ef-259a7a590ea5!
	W1206 09:09:38.923375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:38.928767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:40.932337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:40.936814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:42.939581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:42.944068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:44.948204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:44.953799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:46.956951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:09:46.962296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-931091 -n embed-certs-931091
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-931091 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-718157 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-718157 --alsologtostderr -v=1: exit status 80 (2.06204716s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-718157 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:09:55.625413  299629 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:55.625518  299629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:55.625526  299629 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:55.625530  299629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:55.625753  299629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:55.625956  299629 out.go:368] Setting JSON to false
	I1206 09:09:55.625973  299629 mustload.go:66] Loading cluster: newest-cni-718157
	I1206 09:09:55.626314  299629 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:55.626725  299629 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:55.645771  299629 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:55.646020  299629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:55.702769  299629 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-06 09:09:55.692227907 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:55.703396  299629 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-718157 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:09:55.705133  299629 out.go:179] * Pausing node newest-cni-718157 ... 
	I1206 09:09:55.706242  299629 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:55.706532  299629 ssh_runner.go:195] Run: systemctl --version
	I1206 09:09:55.706572  299629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:55.724859  299629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:55.819052  299629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:55.831340  299629 pause.go:52] kubelet running: true
	I1206 09:09:55.831406  299629 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:55.975420  299629 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:55.975523  299629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:56.041945  299629 cri.go:89] found id: "c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb"
	I1206 09:09:56.041963  299629 cri.go:89] found id: "fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd"
	I1206 09:09:56.041967  299629 cri.go:89] found id: "edfd807dab22fa6213edfb81545f201f0a403a8832df765a9ff61a63fff61f00"
	I1206 09:09:56.041971  299629 cri.go:89] found id: "34fd2947eabbba8509f54f214b029f7c2bd39db41b9053eaa3e3ddd4162a81a1"
	I1206 09:09:56.041974  299629 cri.go:89] found id: "253464fe728f5df3928faa0de25e3ac14233c44ad60ec693fc2e9cd6e668046f"
	I1206 09:09:56.041979  299629 cri.go:89] found id: "5b265fccde521d8e110f888965af6a403842f8d38dd2f53f9725ac68df3a988f"
	I1206 09:09:56.041983  299629 cri.go:89] found id: ""
	I1206 09:09:56.042040  299629 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:56.053426  299629 retry.go:31] will retry after 313.109159ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:56Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:56.366922  299629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:56.380545  299629 pause.go:52] kubelet running: false
	I1206 09:09:56.380595  299629 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:56.503295  299629 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:56.503384  299629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:56.576577  299629 cri.go:89] found id: "c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb"
	I1206 09:09:56.576629  299629 cri.go:89] found id: "fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd"
	I1206 09:09:56.576648  299629 cri.go:89] found id: "edfd807dab22fa6213edfb81545f201f0a403a8832df765a9ff61a63fff61f00"
	I1206 09:09:56.576655  299629 cri.go:89] found id: "34fd2947eabbba8509f54f214b029f7c2bd39db41b9053eaa3e3ddd4162a81a1"
	I1206 09:09:56.576658  299629 cri.go:89] found id: "253464fe728f5df3928faa0de25e3ac14233c44ad60ec693fc2e9cd6e668046f"
	I1206 09:09:56.576665  299629 cri.go:89] found id: "5b265fccde521d8e110f888965af6a403842f8d38dd2f53f9725ac68df3a988f"
	I1206 09:09:56.576667  299629 cri.go:89] found id: ""
	I1206 09:09:56.576710  299629 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:56.589150  299629 retry.go:31] will retry after 255.047151ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:56Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:56.844634  299629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:56.857439  299629 pause.go:52] kubelet running: false
	I1206 09:09:56.857500  299629 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:56.975024  299629 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:56.975096  299629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:57.052242  299629 cri.go:89] found id: "c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb"
	I1206 09:09:57.052269  299629 cri.go:89] found id: "fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd"
	I1206 09:09:57.052276  299629 cri.go:89] found id: "edfd807dab22fa6213edfb81545f201f0a403a8832df765a9ff61a63fff61f00"
	I1206 09:09:57.052280  299629 cri.go:89] found id: "34fd2947eabbba8509f54f214b029f7c2bd39db41b9053eaa3e3ddd4162a81a1"
	I1206 09:09:57.052284  299629 cri.go:89] found id: "253464fe728f5df3928faa0de25e3ac14233c44ad60ec693fc2e9cd6e668046f"
	I1206 09:09:57.052289  299629 cri.go:89] found id: "5b265fccde521d8e110f888965af6a403842f8d38dd2f53f9725ac68df3a988f"
	I1206 09:09:57.052293  299629 cri.go:89] found id: ""
	I1206 09:09:57.052341  299629 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:57.069445  299629 retry.go:31] will retry after 342.397402ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:57Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:57.413062  299629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:09:57.425892  299629 pause.go:52] kubelet running: false
	I1206 09:09:57.425940  299629 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:09:57.541579  299629 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:09:57.541681  299629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:09:57.607814  299629 cri.go:89] found id: "c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb"
	I1206 09:09:57.607834  299629 cri.go:89] found id: "fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd"
	I1206 09:09:57.607839  299629 cri.go:89] found id: "edfd807dab22fa6213edfb81545f201f0a403a8832df765a9ff61a63fff61f00"
	I1206 09:09:57.607844  299629 cri.go:89] found id: "34fd2947eabbba8509f54f214b029f7c2bd39db41b9053eaa3e3ddd4162a81a1"
	I1206 09:09:57.607848  299629 cri.go:89] found id: "253464fe728f5df3928faa0de25e3ac14233c44ad60ec693fc2e9cd6e668046f"
	I1206 09:09:57.607853  299629 cri.go:89] found id: "5b265fccde521d8e110f888965af6a403842f8d38dd2f53f9725ac68df3a988f"
	I1206 09:09:57.607857  299629 cri.go:89] found id: ""
	I1206 09:09:57.607903  299629 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:09:57.622283  299629 out.go:203] 
	W1206 09:09:57.623405  299629 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:09:57.623421  299629 out.go:285] * 
	* 
	W1206 09:09:57.627469  299629 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:09:57.628866  299629 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-718157 --alsologtostderr -v=1 failed: exit status 80
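
The exit status 80 above comes from the repeated `sudo runc list -f json` probe failing inside the node with "open /run/runc: no such file or directory", which stops the pause path from ever listing the running containers. A minimal Go sketch (not part of the suite; the container name is taken from this run and may no longer exist) for re-running the same probe by hand through `docker exec`:

	// repro_runc_list.go: a hedged sketch, not test code. It re-issues the
	// "runc list -f json" probe that the pause path runs over SSH, here via
	// "docker exec" against the kic node container from this run.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Container name copied from this failing run; adjust for other profiles.
		out, err := exec.Command("docker", "exec", "newest-cni-718157",
			"sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("output: %s\n", out)
		if err != nil {
			// Expected to match the stderr captured above (exit status 1).
			fmt.Printf("runc list failed: %v\n", err)
		}
	}
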
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
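
The proxy snapshot above just reports the three standard variables from the host environment, rendered as "<empty>" when unset. An illustrative sketch (not the helpers_test implementation) of the same rendering:

	// env_snapshot.go: illustrative only; prints the proxy variables the way
	// the post-mortem line above formats them ("<empty>" when unset).
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, key := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			val := os.Getenv(key)
			if val == "" {
				val = "<empty>"
			}
			fmt.Printf("%s=%q ", key, val)
		}
		fmt.Println()
	}
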
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-718157
helpers_test.go:243: (dbg) docker inspect newest-cni-718157:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c",
	        "Created": "2025-12-06T09:09:19.234709377Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:09:44.719667223Z",
	            "FinishedAt": "2025-12-06T09:09:43.82048629Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/hosts",
	        "LogPath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c-json.log",
	        "Name": "/newest-cni-718157",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-718157:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-718157",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c",
	                "LowerDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-718157",
	                "Source": "/var/lib/docker/volumes/newest-cni-718157/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-718157",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-718157",
	                "name.minikube.sigs.k8s.io": "newest-cni-718157",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "68f0a8081441821afe032e87edc48aa154d77426b4d82bb9b489b39aa91c26a9",
	            "SandboxKey": "/var/run/docker/netns/68f0a8081441",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-718157": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50d0f2baf000bc1c263e721b7068e9545be54f5ae74e0afeafff76b764fd61ec",
	                    "EndpointID": "2d3bdf264e845355f19d4edb3418a8b4921e3fd48f864ab3d06e32bba818a051",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "7a:91:e5:10:3e:7f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-718157",
	                        "a65b6e472b2d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
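
The NetworkSettings.Ports map in the inspect output above is what the tooling walks to resolve the node's host-mapped SSH port (33098 in this run; the same Go-template lookup appears as a cli_runner format string in the logs below). A standalone sketch of that lookup, assuming the container still exists:

	// port_probe.go: resolves the host port bound to the node's 22/tcp using
	// the same template expression seen in the cli_runner calls below.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format,
			"newest-cni-718157").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("SSH host port: %s", out) // 33098 for this run
	}
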
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718157 -n newest-cni-718157
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718157 -n newest-cni-718157: exit status 2 (315.122545ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
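
The --format={{.Host}} flag on the status call above is a Go text/template applied to minikube's status fields, so stdout shows only "Running" for the host while the non-zero exit code separately reflects the cluster components (kubelet had just been disabled during the pause attempt). An illustrative sketch of that template rendering; the struct is an assumption, not minikube's actual type:

	// status_template.go: renders a "{{.Host}}"-style --format template over
	// an illustrative status struct (field names are assumptions).
	package main

	import (
		"os"
		"text/template"
	)

	type status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Illustrative values mirroring this run: host container running,
		// kubelet disabled by the failed pause attempt.
		_ = tmpl.Execute(os.Stdout, status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
	}
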
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-718157 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p stopped-upgrade-454433                                                                                                                                                                                                                            │ stopped-upgrade-454433       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ image   │ no-preload-769733 image list --format=json                                                                                                                                                                                                           │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p no-preload-769733 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-702638                                                                                                                                                                                                                         │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-217626                                                                                                                                                                                                                      │ disable-driver-mounts-217626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ image   │ old-k8s-version-322324 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p old-k8s-version-322324 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                                 │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                                 │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p old-k8s-version-322324                                                                                                                                                                                                                            │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p old-k8s-version-322324                                                                                                                                                                                                                            │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p auto-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-718157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ stop    │ -p newest-cni-718157 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-718157 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ addons  │ enable metrics-server -p embed-certs-931091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-931091 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ image   │ newest-cni-718157 image list --format=json                                                                                                                                                                                                           │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p newest-cni-718157 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:09:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:09:44.481140  296022 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:44.481466  296022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:44.481478  296022 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:44.481485  296022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:44.481758  296022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:44.482365  296022 out.go:368] Setting JSON to false
	I1206 09:09:44.483661  296022 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3135,"bootTime":1765009049,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:09:44.483724  296022 start.go:143] virtualization: kvm guest
	I1206 09:09:44.485707  296022 out.go:179] * [newest-cni-718157] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:09:44.487187  296022 notify.go:221] Checking for updates...
	I1206 09:09:44.487203  296022 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:09:44.488662  296022 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:09:44.489948  296022 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:44.491184  296022 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:09:44.495524  296022 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:09:44.496714  296022 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:09:44.498401  296022 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:44.499006  296022 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:09:44.522805  296022 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:09:44.522907  296022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:44.578942  296022 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:44.569070956 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:44.579099  296022 docker.go:319] overlay module found
	I1206 09:09:44.580885  296022 out.go:179] * Using the docker driver based on existing profile
	I1206 09:09:44.582071  296022 start.go:309] selected driver: docker
	I1206 09:09:44.582087  296022 start.go:927] validating driver "docker" against &{Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:44.582189  296022 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:09:44.582819  296022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:44.639949  296022 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:44.63090625 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:44.640283  296022 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:09:44.640317  296022 cni.go:84] Creating CNI manager for ""
	I1206 09:09:44.640364  296022 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:44.640416  296022 start.go:353] cluster config:
	{Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:44.642383  296022 out.go:179] * Starting "newest-cni-718157" primary control-plane node in "newest-cni-718157" cluster
	I1206 09:09:44.643675  296022 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:09:44.644874  296022 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:09:44.665443  289573 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:09:44.665509  289573 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:09:44.665628  289573 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:09:44.665737  289573 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:09:44.665795  289573 kubeadm.go:319] OS: Linux
	I1206 09:09:44.665865  289573 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:09:44.665947  289573 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:09:44.666038  289573 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:09:44.666121  289573 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:09:44.666203  289573 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:09:44.666279  289573 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:09:44.666375  289573 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:09:44.666422  289573 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:09:44.666516  289573 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:09:44.666665  289573 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:09:44.666808  289573 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:09:44.666905  289573 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:09:44.668615  289573 out.go:252]   - Generating certificates and keys ...
	I1206 09:09:44.668699  289573 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:09:44.668782  289573 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:09:44.668875  289573 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:09:44.668944  289573 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:09:44.669035  289573 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:09:44.669078  289573 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:09:44.669125  289573 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:09:44.669225  289573 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:09:44.669293  289573 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:09:44.669461  289573 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:09:44.669553  289573 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:09:44.669624  289573 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:09:44.669662  289573 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:09:44.669708  289573 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:09:44.669749  289573 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:09:44.669799  289573 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:09:44.669857  289573 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:09:44.669915  289573 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:09:44.669977  289573 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:09:44.670071  289573 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:09:44.670126  289573 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:09:44.671573  289573 out.go:252]   - Booting up control plane ...
	I1206 09:09:44.671656  289573 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:09:44.671732  289573 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:09:44.671819  289573 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:09:44.671928  289573 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:09:44.672037  289573 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:09:44.672121  289573 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:09:44.672242  289573 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:09:44.672304  289573 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:09:44.672443  289573 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:09:44.672574  289573 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:09:44.672667  289573 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.699798ms
	I1206 09:09:44.672783  289573 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:09:44.672905  289573 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:09:44.673043  289573 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:09:44.673142  289573 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:09:44.673228  289573 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.714776717s
	I1206 09:09:44.673318  289573 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.957962814s
	I1206 09:09:44.673404  289573 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501917325s
	I1206 09:09:44.673513  289573 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:09:44.673614  289573 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:09:44.673666  289573 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:09:44.673822  289573 kubeadm.go:319] [mark-control-plane] Marking the node auto-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:09:44.673867  289573 kubeadm.go:319] [bootstrap-token] Using token: sx7844.6ut2unu1ekbq276s
	I1206 09:09:44.675185  289573 out.go:252]   - Configuring RBAC rules ...
	I1206 09:09:44.675273  289573 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:09:44.675339  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:09:44.675487  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:09:44.675648  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:09:44.675801  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:09:44.675911  289573 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:09:44.676080  289573 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:09:44.676143  289573 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:09:44.676210  289573 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:09:44.676228  289573 kubeadm.go:319] 
	I1206 09:09:44.676334  289573 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:09:44.676344  289573 kubeadm.go:319] 
	I1206 09:09:44.676456  289573 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:09:44.676465  289573 kubeadm.go:319] 
	I1206 09:09:44.676504  289573 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:09:44.676580  289573 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:09:44.676642  289573 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:09:44.676666  289573 kubeadm.go:319] 
	I1206 09:09:44.676741  289573 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:09:44.676752  289573 kubeadm.go:319] 
	I1206 09:09:44.676806  289573 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:09:44.676815  289573 kubeadm.go:319] 
	I1206 09:09:44.676867  289573 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:09:44.676950  289573 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:09:44.677071  289573 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:09:44.677086  289573 kubeadm.go:319] 
	I1206 09:09:44.677190  289573 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:09:44.677289  289573 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:09:44.677299  289573 kubeadm.go:319] 
	I1206 09:09:44.677408  289573 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sx7844.6ut2unu1ekbq276s \
	I1206 09:09:44.677524  289573 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:09:44.677546  289573 kubeadm.go:319] 	--control-plane 
	I1206 09:09:44.677559  289573 kubeadm.go:319] 
	I1206 09:09:44.677679  289573 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:09:44.677701  289573 kubeadm.go:319] 
	I1206 09:09:44.677804  289573 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sx7844.6ut2unu1ekbq276s \
	I1206 09:09:44.677974  289573 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:09:44.678025  289573 cni.go:84] Creating CNI manager for ""
	I1206 09:09:44.678041  289573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:44.680203  289573 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:09:44.645971  296022 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:09:44.646014  296022 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:09:44.646027  296022 cache.go:65] Caching tarball of preloaded images
	I1206 09:09:44.646085  296022 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:09:44.646135  296022 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:09:44.646151  296022 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:09:44.646240  296022 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/config.json ...
	I1206 09:09:44.668198  296022 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:09:44.668220  296022 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:09:44.668234  296022 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:09:44.668262  296022 start.go:360] acquireMachinesLock for newest-cni-718157: {Name:mkd215ec128fd4b5f2323afe6abf6121f194a6a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:09:44.668314  296022 start.go:364] duration metric: took 35.42µs to acquireMachinesLock for "newest-cni-718157"
	I1206 09:09:44.668331  296022 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:09:44.668339  296022 fix.go:54] fixHost starting: 
	I1206 09:09:44.668544  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:44.690123  296022 fix.go:112] recreateIfNeeded on newest-cni-718157: state=Stopped err=<nil>
	W1206 09:09:44.690153  296022 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:09:44.681497  289573 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:09:44.687346  289573 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:09:44.687367  289573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:09:44.702056  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:09:44.964321  289573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:09:44.964415  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:44.964438  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-646473 minikube.k8s.io/updated_at=2025_12_06T09_09_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=auto-646473 minikube.k8s.io/primary=true
	I1206 09:09:44.976722  289573 ops.go:34] apiserver oom_adj: -16
	I1206 09:09:45.056190  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:45.556870  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1206 09:09:43.614410  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:09:46.114391  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	I1206 09:09:44.691658  296022 out.go:252] * Restarting existing docker container for "newest-cni-718157" ...
	I1206 09:09:44.691729  296022 cli_runner.go:164] Run: docker start newest-cni-718157
	I1206 09:09:44.976968  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:45.002757  296022 kic.go:430] container "newest-cni-718157" state is running.
	I1206 09:09:45.003271  296022 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718157
	I1206 09:09:45.029215  296022 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/config.json ...
	I1206 09:09:45.029493  296022 machine.go:94] provisionDockerMachine start ...
	I1206 09:09:45.029581  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:45.054494  296022 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:45.054851  296022 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 09:09:45.054883  296022 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:09:45.055595  296022 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44698->127.0.0.1:33098: read: connection reset by peer
	I1206 09:09:48.193747  296022 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-718157
	
	I1206 09:09:48.193772  296022 ubuntu.go:182] provisioning hostname "newest-cni-718157"
	I1206 09:09:48.193819  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:48.213223  296022 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:48.213419  296022 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 09:09:48.213432  296022 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-718157 && echo "newest-cni-718157" | sudo tee /etc/hostname
	I1206 09:09:48.358883  296022 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-718157
	
	I1206 09:09:48.358998  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:48.380018  296022 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:48.380271  296022 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 09:09:48.380299  296022 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718157' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718157/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718157' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:09:48.510773  296022 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:09:48.510799  296022 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:09:48.510821  296022 ubuntu.go:190] setting up certificates
	I1206 09:09:48.510833  296022 provision.go:84] configureAuth start
	I1206 09:09:48.510890  296022 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718157
	I1206 09:09:48.531779  296022 provision.go:143] copyHostCerts
	I1206 09:09:48.531839  296022 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:09:48.531853  296022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:09:48.531925  296022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:09:48.532111  296022 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:09:48.532124  296022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:09:48.532166  296022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:09:48.532265  296022 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:09:48.532276  296022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:09:48.532313  296022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:09:48.532407  296022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718157 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-718157]
	I1206 09:09:48.549254  296022 provision.go:177] copyRemoteCerts
	I1206 09:09:48.549315  296022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:09:48.549352  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:48.570644  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:48.679755  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:09:48.702654  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:09:48.722373  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:09:48.744812  296022 provision.go:87] duration metric: took 233.965951ms to configureAuth
	I1206 09:09:48.744841  296022 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:09:48.745050  296022 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:48.745176  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:48.765668  296022 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:48.765918  296022 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 09:09:48.765951  296022 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:09:49.081306  296022 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:09:49.081343  296022 machine.go:97] duration metric: took 4.051829103s to provisionDockerMachine
	I1206 09:09:49.081358  296022 start.go:293] postStartSetup for "newest-cni-718157" (driver="docker")
	I1206 09:09:49.081372  296022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:09:49.081460  296022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:09:49.081514  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:49.104013  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:49.204014  296022 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:09:49.208223  296022 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:09:49.208257  296022 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:09:49.208269  296022 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:09:49.208333  296022 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:09:49.208449  296022 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:09:49.208567  296022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:09:49.217126  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:49.236720  296022 start.go:296] duration metric: took 155.347202ms for postStartSetup
	I1206 09:09:49.236811  296022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:09:49.236860  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:49.256904  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:49.350271  296022 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:09:49.354952  296022 fix.go:56] duration metric: took 4.686607408s for fixHost
	I1206 09:09:49.354980  296022 start.go:83] releasing machines lock for "newest-cni-718157", held for 4.686654808s
	I1206 09:09:49.355079  296022 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718157
	I1206 09:09:49.381360  296022 ssh_runner.go:195] Run: cat /version.json
	I1206 09:09:49.381427  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:49.381433  296022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:09:49.381509  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:49.407732  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:49.408475  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:46.056911  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:46.557128  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:47.056939  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:47.557051  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:48.056906  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:48.556541  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:49.057195  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:49.557093  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:49.632714  289573 kubeadm.go:1114] duration metric: took 4.668365798s to wait for elevateKubeSystemPrivileges
	I1206 09:09:49.632762  289573 kubeadm.go:403] duration metric: took 17.176079981s to StartCluster
	I1206 09:09:49.632785  289573 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:49.632854  289573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:49.634711  289573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:49.634981  289573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:09:49.635017  289573 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:49.635076  289573 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:09:49.635179  289573 addons.go:70] Setting storage-provisioner=true in profile "auto-646473"
	I1206 09:09:49.635197  289573 addons.go:239] Setting addon storage-provisioner=true in "auto-646473"
	I1206 09:09:49.635196  289573 config.go:182] Loaded profile config "auto-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:49.635224  289573 host.go:66] Checking if "auto-646473" exists ...
	I1206 09:09:49.635242  289573 addons.go:70] Setting default-storageclass=true in profile "auto-646473"
	I1206 09:09:49.635257  289573 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-646473"
	I1206 09:09:49.635611  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Status}}
	I1206 09:09:49.635752  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Status}}
	I1206 09:09:49.636972  289573 out.go:179] * Verifying Kubernetes components...
	I1206 09:09:49.639244  289573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:49.662131  289573 addons.go:239] Setting addon default-storageclass=true in "auto-646473"
	I1206 09:09:49.662170  289573 host.go:66] Checking if "auto-646473" exists ...
	I1206 09:09:49.662553  289573 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:09:49.662559  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Status}}
	I1206 09:09:49.663932  289573 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:49.663955  289573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:09:49.664039  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:49.697386  289573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa Username:docker}
	I1206 09:09:49.698169  289573 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:49.698258  289573 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:09:49.698364  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:49.724292  289573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa Username:docker}
	I1206 09:09:49.748731  289573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:09:49.793893  289573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:49.834871  289573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:49.853175  289573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:49.978966  289573 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1206 09:09:49.979206  289573 node_ready.go:35] waiting up to 15m0s for node "auto-646473" to be "Ready" ...
	I1206 09:09:50.186175  289573 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:09:49.506106  296022 ssh_runner.go:195] Run: systemctl --version
	I1206 09:09:49.564976  296022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:09:49.607262  296022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:09:49.612750  296022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:09:49.612825  296022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:09:49.623142  296022 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:09:49.623167  296022 start.go:496] detecting cgroup driver to use...
	I1206 09:09:49.623201  296022 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:09:49.623244  296022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:09:49.642423  296022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:09:49.659174  296022 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:09:49.659240  296022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:09:49.680635  296022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:09:49.701591  296022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:09:49.823035  296022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:09:49.941289  296022 docker.go:234] disabling docker service ...
	I1206 09:09:49.941378  296022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:09:49.966632  296022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:09:49.985481  296022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:09:50.093327  296022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:09:50.199794  296022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:09:50.214917  296022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:09:50.230071  296022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:09:50.230153  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.239752  296022 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:09:50.239816  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.250843  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.260028  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.269226  296022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:09:50.277498  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.286828  296022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.295762  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.304518  296022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:09:50.312111  296022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:09:50.320016  296022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:50.407336  296022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:09:50.541756  296022 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:09:50.541835  296022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:09:50.547092  296022 start.go:564] Will wait 60s for crictl version
	I1206 09:09:50.547147  296022 ssh_runner.go:195] Run: which crictl
	I1206 09:09:50.551496  296022 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:09:50.581423  296022 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:09:50.581523  296022 ssh_runner.go:195] Run: crio --version
	I1206 09:09:50.616541  296022 ssh_runner.go:195] Run: crio --version
	I1206 09:09:50.654230  296022 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 09:09:50.656217  296022 cli_runner.go:164] Run: docker network inspect newest-cni-718157 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:09:50.677490  296022 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:09:50.681685  296022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:50.695466  296022 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 09:09:50.187575  289573 addons.go:530] duration metric: took 552.492878ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:09:50.483121  289573 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-646473" context rescaled to 1 replicas
	I1206 09:09:50.696800  296022 kubeadm.go:884] updating cluster {Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:09:50.696973  296022 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:09:50.697123  296022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:50.729640  296022 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:50.729659  296022 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:09:50.729719  296022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:50.757683  296022 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:50.757705  296022 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:09:50.757712  296022 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 09:09:50.757807  296022 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-718157 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:09:50.757866  296022 ssh_runner.go:195] Run: crio config
	I1206 09:09:50.804862  296022 cni.go:84] Creating CNI manager for ""
	I1206 09:09:50.804882  296022 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:50.804895  296022 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 09:09:50.804920  296022 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-718157 NodeName:newest-cni-718157 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:09:50.805077  296022 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-718157"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:09:50.805153  296022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:09:50.813446  296022 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:09:50.813540  296022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:09:50.821772  296022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 09:09:50.835196  296022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:09:50.849236  296022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1206 09:09:50.862909  296022 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:09:50.866585  296022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:50.876487  296022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:50.956608  296022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:50.976830  296022 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157 for IP: 192.168.94.2
	I1206 09:09:50.976858  296022 certs.go:195] generating shared ca certs ...
	I1206 09:09:50.976881  296022 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:50.977046  296022 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:09:50.977087  296022 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:09:50.977097  296022 certs.go:257] generating profile certs ...
	I1206 09:09:50.977202  296022 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/client.key
	I1206 09:09:50.977251  296022 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key.5210bb9f
	I1206 09:09:50.977288  296022 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.key
	I1206 09:09:50.977393  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:09:50.977423  296022 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:09:50.977432  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:09:50.977456  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:09:50.977479  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:09:50.977503  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:09:50.977545  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:50.978216  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:09:51.001611  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:09:51.024773  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:09:51.045549  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:09:51.070962  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:09:51.090925  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:09:51.108198  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:09:51.125934  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:09:51.144195  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:09:51.161572  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:09:51.179489  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:09:51.198308  296022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:09:51.210776  296022 ssh_runner.go:195] Run: openssl version
	I1206 09:09:51.216996  296022 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:09:51.224295  296022 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:09:51.231953  296022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:09:51.235773  296022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:09:51.235826  296022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:09:51.273021  296022 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:09:51.280876  296022 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:09:51.288553  296022 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:09:51.296073  296022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:09:51.299978  296022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:09:51.300039  296022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:09:51.335492  296022 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:51.343469  296022 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:51.351030  296022 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:09:51.358333  296022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:51.361940  296022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:51.362009  296022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:51.396742  296022 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:09:51.404443  296022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:09:51.408375  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:09:51.445955  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:09:51.481431  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:09:51.525202  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:09:51.570287  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:09:51.619039  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 09:09:51.674430  296022 kubeadm.go:401] StartCluster: {Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:51.674707  296022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:09:51.674780  296022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:09:51.705381  296022 cri.go:89] found id: "edfd807dab22fa6213edfb81545f201f0a403a8832df765a9ff61a63fff61f00"
	I1206 09:09:51.705405  296022 cri.go:89] found id: "34fd2947eabbba8509f54f214b029f7c2bd39db41b9053eaa3e3ddd4162a81a1"
	I1206 09:09:51.705411  296022 cri.go:89] found id: "253464fe728f5df3928faa0de25e3ac14233c44ad60ec693fc2e9cd6e668046f"
	I1206 09:09:51.705415  296022 cri.go:89] found id: "5b265fccde521d8e110f888965af6a403842f8d38dd2f53f9725ac68df3a988f"
	I1206 09:09:51.705420  296022 cri.go:89] found id: ""
	I1206 09:09:51.705475  296022 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:09:51.717673  296022 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:51Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:51.717756  296022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:09:51.725831  296022 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:09:51.725849  296022 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:09:51.725899  296022 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:09:51.733937  296022 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:09:51.735237  296022 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-718157" does not appear in /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:51.736140  296022 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-5617/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-718157" cluster setting kubeconfig missing "newest-cni-718157" context setting]
	I1206 09:09:51.737494  296022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:51.739728  296022 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:09:51.748879  296022 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1206 09:09:51.748914  296022 kubeadm.go:602] duration metric: took 23.059392ms to restartPrimaryControlPlane
	I1206 09:09:51.748924  296022 kubeadm.go:403] duration metric: took 74.50445ms to StartCluster
	I1206 09:09:51.748942  296022 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:51.749022  296022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:51.751464  296022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:51.751731  296022 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:51.751823  296022 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:09:51.751919  296022 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-718157"
	I1206 09:09:51.751938  296022 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-718157"
	I1206 09:09:51.751938  296022 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	W1206 09:09:51.751946  296022 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:09:51.751961  296022 addons.go:70] Setting dashboard=true in profile "newest-cni-718157"
	I1206 09:09:51.751976  296022 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:51.752002  296022 addons.go:239] Setting addon dashboard=true in "newest-cni-718157"
	W1206 09:09:51.752012  296022 addons.go:248] addon dashboard should already be in state true
	I1206 09:09:51.752020  296022 addons.go:70] Setting default-storageclass=true in profile "newest-cni-718157"
	I1206 09:09:51.752047  296022 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:51.752048  296022 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-718157"
	I1206 09:09:51.752360  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:51.752505  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:51.752586  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:51.753868  296022 out.go:179] * Verifying Kubernetes components...
	I1206 09:09:51.755853  296022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:51.778554  296022 addons.go:239] Setting addon default-storageclass=true in "newest-cni-718157"
	W1206 09:09:51.778580  296022 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:09:51.778607  296022 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:51.779086  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:51.779424  296022 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:09:51.779439  296022 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 09:09:51.780756  296022 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:51.780774  296022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:09:51.780826  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:51.780871  296022 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1206 09:09:48.115076  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:09:50.614950  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	I1206 09:09:51.782282  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 09:09:51.782301  296022 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 09:09:51.782355  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:51.807583  296022 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:51.807609  296022 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:09:51.807669  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:51.821708  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:51.822830  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:51.840355  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:51.896324  296022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:51.908922  296022 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:09:51.909003  296022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:09:51.921037  296022 api_server.go:72] duration metric: took 169.274238ms to wait for apiserver process to appear ...
	I1206 09:09:51.921063  296022 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:09:51.921082  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:51.928431  296022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:51.929502  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:09:51.929521  296022 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:09:51.944738  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:09:51.944763  296022 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:09:51.945026  296022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:51.959261  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:09:51.959283  296022 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:09:51.976090  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:09:51.976113  296022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:09:51.993283  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:09:51.993308  296022 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:09:52.007375  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:09:52.007402  296022 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:09:52.020021  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:09:52.020043  296022 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:09:52.032376  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:09:52.032394  296022 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:09:52.045201  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:09:52.045225  296022 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:09:52.058525  296022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:09:53.311509  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 09:09:53.311539  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 09:09:53.311554  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:53.362357  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 09:09:53.362399  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 09:09:53.421619  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:53.426200  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:09:53.426232  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:09:53.867390  296022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.938928215s)
	I1206 09:09:53.867470  296022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.922413934s)
	I1206 09:09:53.867604  296022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.80904183s)
	I1206 09:09:53.869217  296022 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-718157 addons enable metrics-server
	
	I1206 09:09:53.878801  296022 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1206 09:09:53.880378  296022 addons.go:530] duration metric: took 2.128562509s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1206 09:09:53.921778  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:53.926604  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:09:53.926626  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:09:54.422145  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:54.427281  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:09:54.427311  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:09:54.922060  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:54.926390  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1206 09:09:54.927406  296022 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:09:54.927428  296022 api_server.go:131] duration metric: took 3.0063594s to wait for apiserver health ...
	I1206 09:09:54.927436  296022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:09:54.931163  296022 system_pods.go:59] 8 kube-system pods found
	I1206 09:09:54.931191  296022 system_pods.go:61] "coredns-7d764666f9-4xnvs" [56b811f4-2c33-47ae-a18e-91bf00c91dda] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:09:54.931203  296022 system_pods.go:61] "etcd-newest-cni-718157" [3d942387-01d6-4fd9-a474-258befcbde87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:09:54.931213  296022 system_pods.go:61] "kindnet-6q6w2" [740bbe6b-e50c-4cf4-b593-5f871820515c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:09:54.931224  296022 system_pods.go:61] "kube-apiserver-newest-cni-718157" [488d8c89-4121-4c74-9433-c14123aa9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:09:54.931233  296022 system_pods.go:61] "kube-controller-manager-newest-cni-718157" [f5fd9d31-9322-4da5-8e82-8e20ae26ca00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:09:54.931240  296022 system_pods.go:61] "kube-proxy-46zxv" [13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:09:54.931252  296022 system_pods.go:61] "kube-scheduler-newest-cni-718157" [a256efa5-856a-4103-b3b9-397143dc1894] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:09:54.931259  296022 system_pods.go:61] "storage-provisioner" [72d40874-81fd-421f-95e4-7f8b2380f340] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:09:54.931275  296022 system_pods.go:74] duration metric: took 3.832886ms to wait for pod list to return data ...
	I1206 09:09:54.931284  296022 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:09:54.933640  296022 default_sa.go:45] found service account: "default"
	I1206 09:09:54.933660  296022 default_sa.go:55] duration metric: took 2.368927ms for default service account to be created ...
	I1206 09:09:54.933672  296022 kubeadm.go:587] duration metric: took 3.181913749s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:09:54.933685  296022 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:09:54.935802  296022 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:09:54.935821  296022 node_conditions.go:123] node cpu capacity is 8
	I1206 09:09:54.935836  296022 node_conditions.go:105] duration metric: took 2.143409ms to run NodePressure ...
	I1206 09:09:54.935847  296022 start.go:242] waiting for startup goroutines ...
	I1206 09:09:54.935853  296022 start.go:247] waiting for cluster config update ...
	I1206 09:09:54.935865  296022 start.go:256] writing updated cluster config ...
	I1206 09:09:54.936132  296022 ssh_runner.go:195] Run: rm -f paused
	I1206 09:09:54.985331  296022 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:09:54.987519  296022 out.go:179] * Done! kubectl is now configured to use "newest-cni-718157" cluster and "default" namespace by default
	W1206 09:09:51.983502  289573 node_ready.go:57] node "auto-646473" has "Ready":"False" status (will retry)
	W1206 09:09:54.482670  289573 node_ready.go:57] node "auto-646473" has "Ready":"False" status (will retry)
	W1206 09:09:53.114206  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:09:55.114560  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:09:57.114607  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.35942046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.361838935Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a3c7df49-110c-4a9a-85d8-1068ac835dd2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.364107443Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.364667648Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f5b32157-d19d-414c-af7c-522ff98738e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.36491282Z" level=info msg="Ran pod sandbox 2f5ba92dd965f359d6e69887b9bce0bfaf741d96d2e4c5f31f1c7271b75204d5 with infra container: kube-system/kube-proxy-46zxv/POD" id=a3c7df49-110c-4a9a-85d8-1068ac835dd2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.366197166Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=77ef5b97-af3f-4dbd-8aad-233307afc036 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.366324202Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.367229077Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5edf4081-feb8-45a5-b72a-bb4b8f19719c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.367239305Z" level=info msg="Ran pod sandbox 5a5f819d937dc300f1768a956dc0dff3b684c19ae9bd759134b934429ba3f1ea with infra container: kube-system/kindnet-6q6w2/POD" id=f5b32157-d19d-414c-af7c-522ff98738e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.368154943Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ff2bffaa-a25f-4aa1-bc3c-2071f2d20080 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.368313037Z" level=info msg="Creating container: kube-system/kube-proxy-46zxv/kube-proxy" id=c028d177-b818-4538-a694-1369abcbc575 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.368425396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.369017838Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6124d1d5-0f28-486b-be08-7405cdf1276a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.372210513Z" level=info msg="Creating container: kube-system/kindnet-6q6w2/kindnet-cni" id=01c1395e-54e2-4a37-ba67-9211bb1f7d3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.37247885Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.373863303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.374633658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.376973646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.377668704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.401684113Z" level=info msg="Created container c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb: kube-system/kindnet-6q6w2/kindnet-cni" id=01c1395e-54e2-4a37-ba67-9211bb1f7d3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.402406098Z" level=info msg="Starting container: c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb" id=69f0b9f6-161e-4da7-a402-d31c35591bd9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.404981027Z" level=info msg="Started container" PID=1050 containerID=c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb description=kube-system/kindnet-6q6w2/kindnet-cni id=69f0b9f6-161e-4da7-a402-d31c35591bd9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a5f819d937dc300f1768a956dc0dff3b684c19ae9bd759134b934429ba3f1ea
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.405448924Z" level=info msg="Created container fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd: kube-system/kube-proxy-46zxv/kube-proxy" id=c028d177-b818-4538-a694-1369abcbc575 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.405934201Z" level=info msg="Starting container: fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd" id=efbae18f-9f3a-4067-a7ca-b39fb8bd6a36 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.409296764Z" level=info msg="Started container" PID=1049 containerID=fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd description=kube-system/kube-proxy-46zxv/kube-proxy id=efbae18f-9f3a-4067-a7ca-b39fb8bd6a36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f5ba92dd965f359d6e69887b9bce0bfaf741d96d2e4c5f31f1c7271b75204d5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c27d3065b3fdb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   5a5f819d937dc       kindnet-6q6w2                               kube-system
	fa498972878bf       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   2f5ba92dd965f       kube-proxy-46zxv                            kube-system
	edfd807dab22f       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   4faee54531bb0       kube-scheduler-newest-cni-718157            kube-system
	34fd2947eabbb       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   071cdc19481eb       kube-controller-manager-newest-cni-718157   kube-system
	253464fe728f5       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   42f9de2cfda1d       kube-apiserver-newest-cni-718157            kube-system
	5b265fccde521       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   ea9c259cc0a59       etcd-newest-cni-718157                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-718157
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-718157
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=newest-cni-718157
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_09_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:09:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-718157
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:09:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:09:53 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:09:53 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:09:53 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 06 Dec 2025 09:09:53 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-718157
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                ec2caef7-c7e1-47e9-abcb-e0e0655dbe92
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-718157                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25s
	  kube-system                 kindnet-6q6w2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-newest-cni-718157             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-718157    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-46zxv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-newest-cni-718157             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  21s   node-controller  Node newest-cni-718157 event: Registered Node newest-cni-718157 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-718157 event: Registered Node newest-cni-718157 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [5b265fccde521d8e110f888965af6a403842f8d38dd2f53f9725ac68df3a988f] <==
	{"level":"warn","ts":"2025-12-06T09:09:52.694692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.701353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.708380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.730661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.741495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.747841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.754281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.760457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.767563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.773697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.787116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.807814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.814173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.820663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.826862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.833213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.839466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.845620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.851624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.857817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.864519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.877090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.883718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.889928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.896038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:58 up 52 min,  0 user,  load average: 5.11, 2.99, 1.97
	Linux newest-cni-718157 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb] <==
	I1206 09:09:54.628565       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:09:54.628847       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1206 09:09:54.628984       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:09:54.629020       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:09:54.629051       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:09:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:09:54.833053       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:09:54.849795       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:09:54.849860       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:09:54.928824       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:09:55.249975       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:09:55.250020       1 metrics.go:72] Registering metrics
	I1206 09:09:55.250083       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [253464fe728f5df3928faa0de25e3ac14233c44ad60ec693fc2e9cd6e668046f] <==
	I1206 09:09:53.406235       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:53.406149       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:09:53.406255       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:09:53.406279       1 aggregator.go:187] initial CRD sync complete...
	I1206 09:09:53.406287       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:09:53.406291       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:09:53.406297       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:09:53.406156       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:09:53.410221       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:53.410241       1 policy_source.go:248] refreshing policies
	E1206 09:09:53.412963       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:09:53.413855       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:09:53.448911       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:09:53.677310       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:09:53.703310       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:09:53.721581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:09:53.728538       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:09:53.737648       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:09:53.769807       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.172.17"}
	I1206 09:09:53.781169       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.249.223"}
	I1206 09:09:54.309137       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:09:56.991140       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:09:57.042265       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:09:57.140578       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:09:57.241588       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [34fd2947eabbba8509f54f214b029f7c2bd39db41b9053eaa3e3ddd4162a81a1] <==
	I1206 09:09:56.544234       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.544292       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.544863       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546244       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546295       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546307       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546311       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546262       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546276       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546282       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546273       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546288       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546377       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:09:56.546288       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546307       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546275       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546446       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-718157"
	I1206 09:09:56.546262       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546542       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1206 09:09:56.549814       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.553846       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:09:56.646259       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.646283       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:09:56.646290       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:09:56.653955       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd] <==
	I1206 09:09:54.448460       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:09:54.522800       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:09:54.623266       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:54.623296       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1206 09:09:54.623375       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:09:54.641822       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:09:54.641885       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:09:54.648005       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:09:54.648352       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:09:54.648373       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:09:54.650345       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:09:54.650369       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:09:54.650414       1 config.go:200] "Starting service config controller"
	I1206 09:09:54.650421       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:09:54.651227       1 config.go:309] "Starting node config controller"
	I1206 09:09:54.651251       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:09:54.651262       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:09:54.651280       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:09:54.651269       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:09:54.750576       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:09:54.750598       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:09:54.751919       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [edfd807dab22fa6213edfb81545f201f0a403a8832df765a9ff61a63fff61f00] <==
	I1206 09:09:51.824916       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:09:53.337840       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:09:53.337905       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:09:53.337918       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:09:53.337928       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:09:53.365530       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:09:53.365564       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:09:53.367714       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:09:53.367747       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:09:53.367847       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:09:53.368254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:09:53.468605       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: I1206 09:09:53.470480     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.476420     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-718157\" already exists" pod="kube-system/kube-apiserver-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: I1206 09:09:53.476456     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.482042     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-718157\" already exists" pod="kube-system/kube-controller-manager-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: I1206 09:09:53.482076     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.487660     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-718157\" already exists" pod="kube-system/kube-scheduler-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: I1206 09:09:53.713548     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.719529     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-718157\" already exists" pod="kube-system/kube-controller-manager-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.719654     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-718157" containerName="kube-controller-manager"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.049322     670 apiserver.go:52] "Watching apiserver"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.056396     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: E1206 09:09:54.094323     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-718157" containerName="kube-apiserver"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: E1206 09:09:54.094518     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-718157" containerName="kube-controller-manager"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: E1206 09:09:54.094671     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-718157" containerName="kube-scheduler"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: E1206 09:09:54.094795     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-718157" containerName="etcd"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.146692     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/740bbe6b-e50c-4cf4-b593-5f871820515c-xtables-lock\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.146733     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/740bbe6b-e50c-4cf4-b593-5f871820515c-cni-cfg\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.146894     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690-xtables-lock\") pod \"kube-proxy-46zxv\" (UID: \"13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690\") " pod="kube-system/kube-proxy-46zxv"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.146940     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690-lib-modules\") pod \"kube-proxy-46zxv\" (UID: \"13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690\") " pod="kube-system/kube-proxy-46zxv"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.147046     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/740bbe6b-e50c-4cf4-b593-5f871820515c-lib-modules\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:55 newest-cni-718157 kubelet[670]: E1206 09:09:55.166266     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-718157" containerName="kube-scheduler"
	Dec 06 09:09:55 newest-cni-718157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:09:55 newest-cni-718157 kubelet[670]: I1206 09:09:55.951883     670 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 06 09:09:55 newest-cni-718157 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:09:55 newest-cni-718157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
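Note: the journal excerpt above ends with systemd stopping kubelet.service, which is worth keeping in mind when reading the status probes below. A minimal way to confirm the unit state on the node during triage (a sketch using the same profile; "minikube ssh -- <cmd>" runs the command inside the node):

	out/minikube-linux-amd64 -p newest-cni-718157 ssh -- sudo systemctl status kubelet --no-pager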
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-718157 -n newest-cni-718157
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-718157 -n newest-cni-718157: exit status 2 (320.544047ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-718157 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-4xnvs storage-provisioner dashboard-metrics-scraper-867fb5f87b-6r77h kubernetes-dashboard-b84665fb8-pm4bb
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-718157 describe pod coredns-7d764666f9-4xnvs storage-provisioner dashboard-metrics-scraper-867fb5f87b-6r77h kubernetes-dashboard-b84665fb8-pm4bb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-718157 describe pod coredns-7d764666f9-4xnvs storage-provisioner dashboard-metrics-scraper-867fb5f87b-6r77h kubernetes-dashboard-b84665fb8-pm4bb: exit status 1 (62.923457ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4xnvs" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6r77h" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-pm4bb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-718157 describe pod coredns-7d764666f9-4xnvs storage-provisioner dashboard-metrics-scraper-867fb5f87b-6r77h kubernetes-dashboard-b84665fb8-pm4bb: exit status 1
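Note: the NotFound errors above are expected for this command shape: kubectl describe pod without -n only looks in the default namespace, while the non-running pods listed earlier live elsewhere. A hedged, namespace-aware variant (the kube-system / kubernetes-dashboard placements are the usual minikube layout, assumed rather than taken from this run):

	kubectl --context newest-cni-718157 -n kube-system describe pod coredns-7d764666f9-4xnvs storage-provisioner
	kubectl --context newest-cni-718157 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-867fb5f87b-6r77h kubernetes-dashboard-b84665fb8-pm4bb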
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-718157
helpers_test.go:243: (dbg) docker inspect newest-cni-718157:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c",
	        "Created": "2025-12-06T09:09:19.234709377Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:09:44.719667223Z",
	            "FinishedAt": "2025-12-06T09:09:43.82048629Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/hosts",
	        "LogPath": "/var/lib/docker/containers/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c/a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c-json.log",
	        "Name": "/newest-cni-718157",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-718157:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-718157",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a65b6e472b2dd1c76af54beba9ed96effd99cce05bd5ee2cc530f30f4f0e5c7c",
	                "LowerDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb6da408c88d822c992c73fb8c9fd373b36c8d5884f1e912a1f275680e887142/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-718157",
	                "Source": "/var/lib/docker/volumes/newest-cni-718157/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-718157",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-718157",
	                "name.minikube.sigs.k8s.io": "newest-cni-718157",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "68f0a8081441821afe032e87edc48aa154d77426b4d82bb9b489b39aa91c26a9",
	            "SandboxKey": "/var/run/docker/netns/68f0a8081441",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-718157": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50d0f2baf000bc1c263e721b7068e9545be54f5ae74e0afeafff76b764fd61ec",
	                    "EndpointID": "2d3bdf264e845355f19d4edb3418a8b4921e3fd48f864ab3d06e32bba818a051",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "7a:91:e5:10:3e:7f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-718157",
	                        "a65b6e472b2d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
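Note: the full docker inspect dump above can be narrowed with a Go template when only a few fields matter, the same way the harness reads the SSH port. A minimal sketch against the same container (field paths follow the JSON shown above):

	docker inspect newest-cni-718157 --format '{{.State.Status}} (pid {{.State.Pid}})'
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-718157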
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718157 -n newest-cni-718157
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718157 -n newest-cni-718157: exit status 2 (311.806978ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-718157 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p stopped-upgrade-454433                                                                                                                                                                                                                            │ stopped-upgrade-454433       │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:08 UTC │
	│ start   │ -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	│ image   │ no-preload-769733 image list --format=json                                                                                                                                                                                                           │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p no-preload-769733 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-702638                                                                                                                                                                                                                         │ kubernetes-upgrade-702638    │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-217626                                                                                                                                                                                                                      │ disable-driver-mounts-217626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ image   │ old-k8s-version-322324 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p old-k8s-version-322324 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                                 │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p no-preload-769733                                                                                                                                                                                                                                 │ no-preload-769733            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p old-k8s-version-322324                                                                                                                                                                                                                            │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ delete  │ -p old-k8s-version-322324                                                                                                                                                                                                                            │ old-k8s-version-322324       │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p auto-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-718157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ stop    │ -p newest-cni-718157 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-718157 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ start   │ -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ addons  │ enable metrics-server -p embed-certs-931091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-931091 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-931091           │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	│ image   │ newest-cni-718157 image list --format=json                                                                                                                                                                                                           │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │ 06 Dec 25 09:09 UTC │
	│ pause   │ -p newest-cni-718157 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-718157            │ jenkins │ v1.37.0 │ 06 Dec 25 09:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:09:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:09:44.481140  296022 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:09:44.481466  296022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:44.481478  296022 out.go:374] Setting ErrFile to fd 2...
	I1206 09:09:44.481485  296022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:09:44.481758  296022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:09:44.482365  296022 out.go:368] Setting JSON to false
	I1206 09:09:44.483661  296022 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3135,"bootTime":1765009049,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:09:44.483724  296022 start.go:143] virtualization: kvm guest
	I1206 09:09:44.485707  296022 out.go:179] * [newest-cni-718157] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:09:44.487187  296022 notify.go:221] Checking for updates...
	I1206 09:09:44.487203  296022 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:09:44.488662  296022 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:09:44.489948  296022 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:44.491184  296022 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:09:44.495524  296022 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:09:44.496714  296022 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:09:44.498401  296022 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:44.499006  296022 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:09:44.522805  296022 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:09:44.522907  296022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:44.578942  296022 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:44.569070956 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:44.579099  296022 docker.go:319] overlay module found
	I1206 09:09:44.580885  296022 out.go:179] * Using the docker driver based on existing profile
	I1206 09:09:44.582071  296022 start.go:309] selected driver: docker
	I1206 09:09:44.582087  296022 start.go:927] validating driver "docker" against &{Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:44.582189  296022 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:09:44.582819  296022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:09:44.639949  296022 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:09:44.63090625 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:09:44.640283  296022 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:09:44.640317  296022 cni.go:84] Creating CNI manager for ""
	I1206 09:09:44.640364  296022 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:44.640416  296022 start.go:353] cluster config:
	{Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:44.642383  296022 out.go:179] * Starting "newest-cni-718157" primary control-plane node in "newest-cni-718157" cluster
	I1206 09:09:44.643675  296022 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:09:44.644874  296022 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:09:44.665443  289573 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:09:44.665509  289573 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:09:44.665628  289573 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:09:44.665737  289573 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:09:44.665795  289573 kubeadm.go:319] OS: Linux
	I1206 09:09:44.665865  289573 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:09:44.665947  289573 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:09:44.666038  289573 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:09:44.666121  289573 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:09:44.666203  289573 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:09:44.666279  289573 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:09:44.666375  289573 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:09:44.666422  289573 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:09:44.666516  289573 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:09:44.666665  289573 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:09:44.666808  289573 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:09:44.666905  289573 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:09:44.668615  289573 out.go:252]   - Generating certificates and keys ...
	I1206 09:09:44.668699  289573 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:09:44.668782  289573 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:09:44.668875  289573 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:09:44.668944  289573 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:09:44.669035  289573 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:09:44.669078  289573 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:09:44.669125  289573 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:09:44.669225  289573 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:09:44.669293  289573 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:09:44.669461  289573 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:09:44.669553  289573 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:09:44.669624  289573 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:09:44.669662  289573 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:09:44.669708  289573 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:09:44.669749  289573 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:09:44.669799  289573 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:09:44.669857  289573 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:09:44.669915  289573 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:09:44.669977  289573 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:09:44.670071  289573 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:09:44.670126  289573 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:09:44.671573  289573 out.go:252]   - Booting up control plane ...
	I1206 09:09:44.671656  289573 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:09:44.671732  289573 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:09:44.671819  289573 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:09:44.671928  289573 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:09:44.672037  289573 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:09:44.672121  289573 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:09:44.672242  289573 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:09:44.672304  289573 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:09:44.672443  289573 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:09:44.672574  289573 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:09:44.672667  289573 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.699798ms
	I1206 09:09:44.672783  289573 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:09:44.672905  289573 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:09:44.673043  289573 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:09:44.673142  289573 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:09:44.673228  289573 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.714776717s
	I1206 09:09:44.673318  289573 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.957962814s
	I1206 09:09:44.673404  289573 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501917325s
	I1206 09:09:44.673513  289573 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:09:44.673614  289573 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:09:44.673666  289573 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:09:44.673822  289573 kubeadm.go:319] [mark-control-plane] Marking the node auto-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:09:44.673867  289573 kubeadm.go:319] [bootstrap-token] Using token: sx7844.6ut2unu1ekbq276s
	I1206 09:09:44.675185  289573 out.go:252]   - Configuring RBAC rules ...
	I1206 09:09:44.675273  289573 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:09:44.675339  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:09:44.675487  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:09:44.675648  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:09:44.675801  289573 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:09:44.675911  289573 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:09:44.676080  289573 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:09:44.676143  289573 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:09:44.676210  289573 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:09:44.676228  289573 kubeadm.go:319] 
	I1206 09:09:44.676334  289573 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:09:44.676344  289573 kubeadm.go:319] 
	I1206 09:09:44.676456  289573 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:09:44.676465  289573 kubeadm.go:319] 
	I1206 09:09:44.676504  289573 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:09:44.676580  289573 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:09:44.676642  289573 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:09:44.676666  289573 kubeadm.go:319] 
	I1206 09:09:44.676741  289573 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:09:44.676752  289573 kubeadm.go:319] 
	I1206 09:09:44.676806  289573 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:09:44.676815  289573 kubeadm.go:319] 
	I1206 09:09:44.676867  289573 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:09:44.676950  289573 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:09:44.677071  289573 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:09:44.677086  289573 kubeadm.go:319] 
	I1206 09:09:44.677190  289573 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:09:44.677289  289573 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:09:44.677299  289573 kubeadm.go:319] 
	I1206 09:09:44.677408  289573 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sx7844.6ut2unu1ekbq276s \
	I1206 09:09:44.677524  289573 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:09:44.677546  289573 kubeadm.go:319] 	--control-plane 
	I1206 09:09:44.677559  289573 kubeadm.go:319] 
	I1206 09:09:44.677679  289573 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:09:44.677701  289573 kubeadm.go:319] 
	I1206 09:09:44.677804  289573 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sx7844.6ut2unu1ekbq276s \
	I1206 09:09:44.677974  289573 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:09:44.678025  289573 cni.go:84] Creating CNI manager for ""
	I1206 09:09:44.678041  289573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:44.680203  289573 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:09:44.645971  296022 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:09:44.646014  296022 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:09:44.646027  296022 cache.go:65] Caching tarball of preloaded images
	I1206 09:09:44.646085  296022 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:09:44.646135  296022 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:09:44.646151  296022 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:09:44.646240  296022 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/config.json ...
	I1206 09:09:44.668198  296022 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:09:44.668220  296022 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:09:44.668234  296022 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:09:44.668262  296022 start.go:360] acquireMachinesLock for newest-cni-718157: {Name:mkd215ec128fd4b5f2323afe6abf6121f194a6a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:09:44.668314  296022 start.go:364] duration metric: took 35.42µs to acquireMachinesLock for "newest-cni-718157"
	I1206 09:09:44.668331  296022 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:09:44.668339  296022 fix.go:54] fixHost starting: 
	I1206 09:09:44.668544  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:44.690123  296022 fix.go:112] recreateIfNeeded on newest-cni-718157: state=Stopped err=<nil>
	W1206 09:09:44.690153  296022 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:09:44.681497  289573 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:09:44.687346  289573 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:09:44.687367  289573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:09:44.702056  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:09:44.964321  289573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:09:44.964415  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:44.964438  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-646473 minikube.k8s.io/updated_at=2025_12_06T09_09_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=auto-646473 minikube.k8s.io/primary=true
	I1206 09:09:44.976722  289573 ops.go:34] apiserver oom_adj: -16
	I1206 09:09:45.056190  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:45.556870  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1206 09:09:43.614410  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:09:46.114391  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	I1206 09:09:44.691658  296022 out.go:252] * Restarting existing docker container for "newest-cni-718157" ...
	I1206 09:09:44.691729  296022 cli_runner.go:164] Run: docker start newest-cni-718157
	I1206 09:09:44.976968  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:45.002757  296022 kic.go:430] container "newest-cni-718157" state is running.
	I1206 09:09:45.003271  296022 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718157
	I1206 09:09:45.029215  296022 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/config.json ...
	I1206 09:09:45.029493  296022 machine.go:94] provisionDockerMachine start ...
	I1206 09:09:45.029581  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:45.054494  296022 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:45.054851  296022 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 09:09:45.054883  296022 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:09:45.055595  296022 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44698->127.0.0.1:33098: read: connection reset by peer
	I1206 09:09:48.193747  296022 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-718157
	
	I1206 09:09:48.193772  296022 ubuntu.go:182] provisioning hostname "newest-cni-718157"
	I1206 09:09:48.193819  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:48.213223  296022 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:48.213419  296022 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 09:09:48.213432  296022 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-718157 && echo "newest-cni-718157" | sudo tee /etc/hostname
	I1206 09:09:48.358883  296022 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-718157
	
	I1206 09:09:48.358998  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:48.380018  296022 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:48.380271  296022 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 09:09:48.380299  296022 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718157' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718157/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718157' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:09:48.510773  296022 main.go:143] libmachine: SSH cmd err, output: <nil>: 
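The SSH snippet above is an idempotent rewrite of the 127.0.1.1 entry in the node's /etc/hosts: if a line already maps the new hostname it does nothing, otherwise it replaces an existing 127.0.1.1 line or appends one. A minimal Go sketch of the same idea (a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setEtcHostsName mirrors the logged shell snippet: if no line already ends in
// the hostname, rewrite an existing "127.0.1.1 ..." line or append a new one.
func setEtcHostsName(hostsFile, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hostsFile) {
		return hostsFile // already present, nothing to do
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hostsFile) {
		return re.ReplaceAllString(hostsFile, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hostsFile, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(setEtcHostsName("127.0.0.1 localhost\n127.0.1.1 old-name\n", "newest-cni-718157"))
}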
	I1206 09:09:48.510799  296022 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:09:48.510821  296022 ubuntu.go:190] setting up certificates
	I1206 09:09:48.510833  296022 provision.go:84] configureAuth start
	I1206 09:09:48.510890  296022 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718157
	I1206 09:09:48.531779  296022 provision.go:143] copyHostCerts
	I1206 09:09:48.531839  296022 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:09:48.531853  296022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:09:48.531925  296022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:09:48.532111  296022 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:09:48.532124  296022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:09:48.532166  296022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:09:48.532265  296022 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:09:48.532276  296022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:09:48.532313  296022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:09:48.532407  296022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718157 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-718157]
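The SAN list logged above ([127.0.0.1 192.168.94.2 localhost minikube newest-cni-718157]) is what should end up in the generated server.pem. A small Go sketch for checking that after the fact; the path is taken from the log and would need adjusting for another .minikube directory:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log above; adjust for your own .minikube directory.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// These two slices together hold the certificate's SANs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}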
	I1206 09:09:48.549254  296022 provision.go:177] copyRemoteCerts
	I1206 09:09:48.549315  296022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:09:48.549352  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:48.570644  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:48.679755  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:09:48.702654  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:09:48.722373  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:09:48.744812  296022 provision.go:87] duration metric: took 233.965951ms to configureAuth
	I1206 09:09:48.744841  296022 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:09:48.745050  296022 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:09:48.745176  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:48.765668  296022 main.go:143] libmachine: Using SSH client type: native
	I1206 09:09:48.765918  296022 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 09:09:48.765951  296022 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:09:49.081306  296022 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:09:49.081343  296022 machine.go:97] duration metric: took 4.051829103s to provisionDockerMachine
	I1206 09:09:49.081358  296022 start.go:293] postStartSetup for "newest-cni-718157" (driver="docker")
	I1206 09:09:49.081372  296022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:09:49.081460  296022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:09:49.081514  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:49.104013  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:49.204014  296022 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:09:49.208223  296022 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:09:49.208257  296022 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:09:49.208269  296022 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:09:49.208333  296022 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:09:49.208449  296022 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:09:49.208567  296022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:09:49.217126  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:49.236720  296022 start.go:296] duration metric: took 155.347202ms for postStartSetup
	I1206 09:09:49.236811  296022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:09:49.236860  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:49.256904  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:49.350271  296022 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:09:49.354952  296022 fix.go:56] duration metric: took 4.686607408s for fixHost
	I1206 09:09:49.354980  296022 start.go:83] releasing machines lock for "newest-cni-718157", held for 4.686654808s
	I1206 09:09:49.355079  296022 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718157
	I1206 09:09:49.381360  296022 ssh_runner.go:195] Run: cat /version.json
	I1206 09:09:49.381427  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:49.381433  296022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:09:49.381509  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:49.407732  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:49.408475  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:46.056911  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:46.557128  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:47.056939  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:47.557051  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:48.056906  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:48.556541  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:49.057195  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:49.557093  289573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:09:49.632714  289573 kubeadm.go:1114] duration metric: took 4.668365798s to wait for elevateKubeSystemPrivileges
	I1206 09:09:49.632762  289573 kubeadm.go:403] duration metric: took 17.176079981s to StartCluster
	I1206 09:09:49.632785  289573 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:49.632854  289573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:49.634711  289573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:49.634981  289573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:09:49.635017  289573 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:49.635076  289573 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:09:49.635179  289573 addons.go:70] Setting storage-provisioner=true in profile "auto-646473"
	I1206 09:09:49.635197  289573 addons.go:239] Setting addon storage-provisioner=true in "auto-646473"
	I1206 09:09:49.635196  289573 config.go:182] Loaded profile config "auto-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:09:49.635224  289573 host.go:66] Checking if "auto-646473" exists ...
	I1206 09:09:49.635242  289573 addons.go:70] Setting default-storageclass=true in profile "auto-646473"
	I1206 09:09:49.635257  289573 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-646473"
	I1206 09:09:49.635611  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Status}}
	I1206 09:09:49.635752  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Status}}
	I1206 09:09:49.636972  289573 out.go:179] * Verifying Kubernetes components...
	I1206 09:09:49.639244  289573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:49.662131  289573 addons.go:239] Setting addon default-storageclass=true in "auto-646473"
	I1206 09:09:49.662170  289573 host.go:66] Checking if "auto-646473" exists ...
	I1206 09:09:49.662553  289573 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:09:49.662559  289573 cli_runner.go:164] Run: docker container inspect auto-646473 --format={{.State.Status}}
	I1206 09:09:49.663932  289573 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:49.663955  289573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:09:49.664039  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:49.697386  289573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa Username:docker}
	I1206 09:09:49.698169  289573 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:49.698258  289573 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:09:49.698364  289573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646473
	I1206 09:09:49.724292  289573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/auto-646473/id_rsa Username:docker}
	I1206 09:09:49.748731  289573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:09:49.793893  289573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:49.834871  289573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:49.853175  289573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:49.978966  289573 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
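The long kubectl/sed pipeline at 09:09:49.748731 edits the coredns ConfigMap in place; after the replace, the Corefile should contain a hosts stanza like the following, reconstructed from the sed expression in the log (the same pipeline also inserts a "log" directive ahead of "errors"):

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }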
	I1206 09:09:49.979206  289573 node_ready.go:35] waiting up to 15m0s for node "auto-646473" to be "Ready" ...
	I1206 09:09:50.186175  289573 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:09:49.506106  296022 ssh_runner.go:195] Run: systemctl --version
	I1206 09:09:49.564976  296022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:09:49.607262  296022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:09:49.612750  296022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:09:49.612825  296022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:09:49.623142  296022 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
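The find/mv step above disables any bridge or podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, which keeps them recoverable. A minimal Go sketch of the same rename-to-disable approach (not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("no CNI config dir:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			// Renaming rather than deleting keeps the original config recoverable.
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
			}
		}
	}
}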
	I1206 09:09:49.623167  296022 start.go:496] detecting cgroup driver to use...
	I1206 09:09:49.623201  296022 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:09:49.623244  296022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:09:49.642423  296022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:09:49.659174  296022 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:09:49.659240  296022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:09:49.680635  296022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:09:49.701591  296022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:09:49.823035  296022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:09:49.941289  296022 docker.go:234] disabling docker service ...
	I1206 09:09:49.941378  296022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:09:49.966632  296022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:09:49.985481  296022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:09:50.093327  296022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:09:50.199794  296022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:09:50.214917  296022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:09:50.230071  296022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:09:50.230153  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.239752  296022 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:09:50.239816  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.250843  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.260028  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.269226  296022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:09:50.277498  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.286828  296022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:09:50.295762  296022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
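The sed invocations between 09:09:50.230 and 09:09:50.295 rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image, cgroup_manager, conmon_cgroup and default_sysctls. A small Go sketch of the first of those edits, the pause-image rewrite, under the same assumption the `sed 's|^.*pause_image = .*$|...|'` form relies on (the file already contains a pause_image key):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	fmt.Print(re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`))
}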
	I1206 09:09:50.304518  296022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:09:50.312111  296022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:09:50.320016  296022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:50.407336  296022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:09:50.541756  296022 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:09:50.541835  296022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:09:50.547092  296022 start.go:564] Will wait 60s for crictl version
	I1206 09:09:50.547147  296022 ssh_runner.go:195] Run: which crictl
	I1206 09:09:50.551496  296022 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:09:50.581423  296022 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:09:50.581523  296022 ssh_runner.go:195] Run: crio --version
	I1206 09:09:50.616541  296022 ssh_runner.go:195] Run: crio --version
	I1206 09:09:50.654230  296022 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 09:09:50.656217  296022 cli_runner.go:164] Run: docker network inspect newest-cni-718157 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:09:50.677490  296022 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:09:50.681685  296022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:50.695466  296022 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 09:09:50.187575  289573 addons.go:530] duration metric: took 552.492878ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:09:50.483121  289573 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-646473" context rescaled to 1 replicas
	I1206 09:09:50.696800  296022 kubeadm.go:884] updating cluster {Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:09:50.696973  296022 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:09:50.697123  296022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:50.729640  296022 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:50.729659  296022 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:09:50.729719  296022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:09:50.757683  296022 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:09:50.757705  296022 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:09:50.757712  296022 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 09:09:50.757807  296022 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-718157 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:09:50.757866  296022 ssh_runner.go:195] Run: crio config
	I1206 09:09:50.804862  296022 cni.go:84] Creating CNI manager for ""
	I1206 09:09:50.804882  296022 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:09:50.804895  296022 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 09:09:50.804920  296022 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-718157 NodeName:newest-cni-718157 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:09:50.805077  296022 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-718157"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:09:50.805153  296022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:09:50.813446  296022 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:09:50.813540  296022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:09:50.821772  296022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 09:09:50.835196  296022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:09:50.849236  296022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
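The kubeadm config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here (2218 bytes). As a rough sketch, assuming the gopkg.in/yaml.v3 package, the kubelet settings can be read back out of that multi-document file to confirm they line up with the CRI-O configuration applied earlier (cgroupDriver systemd, crio.sock endpoint):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Only the KubeletConfiguration document carries these two fields.
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
			fmt.Println("containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
		}
	}
}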
	I1206 09:09:50.862909  296022 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:09:50.866585  296022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:09:50.876487  296022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:50.956608  296022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:50.976830  296022 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157 for IP: 192.168.94.2
	I1206 09:09:50.976858  296022 certs.go:195] generating shared ca certs ...
	I1206 09:09:50.976881  296022 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:50.977046  296022 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:09:50.977087  296022 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:09:50.977097  296022 certs.go:257] generating profile certs ...
	I1206 09:09:50.977202  296022 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/client.key
	I1206 09:09:50.977251  296022 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key.5210bb9f
	I1206 09:09:50.977288  296022 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.key
	I1206 09:09:50.977393  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:09:50.977423  296022 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:09:50.977432  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:09:50.977456  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:09:50.977479  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:09:50.977503  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:09:50.977545  296022 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:09:50.978216  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:09:51.001611  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:09:51.024773  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:09:51.045549  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:09:51.070962  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:09:51.090925  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:09:51.108198  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:09:51.125934  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/newest-cni-718157/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:09:51.144195  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:09:51.161572  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:09:51.179489  296022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:09:51.198308  296022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:09:51.210776  296022 ssh_runner.go:195] Run: openssl version
	I1206 09:09:51.216996  296022 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:09:51.224295  296022 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:09:51.231953  296022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:09:51.235773  296022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:09:51.235826  296022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:09:51.273021  296022 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:09:51.280876  296022 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:09:51.288553  296022 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:09:51.296073  296022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:09:51.299978  296022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:09:51.300039  296022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:09:51.335492  296022 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:09:51.343469  296022 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:51.351030  296022 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:09:51.358333  296022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:51.361940  296022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:51.362009  296022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:09:51.396742  296022 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:09:51.404443  296022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:09:51.408375  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:09:51.445955  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:09:51.481431  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:09:51.525202  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:09:51.570287  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:09:51.619039  296022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
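Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is presumably how the restart path decides whether the existing control-plane certs are still usable. The same check in Go, as a rough sketch against one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certs checked in the log; any PEM certificate path works here.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors `openssl x509 -checkend 86400`: fail if the cert is within 24h of expiry.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}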
	I1206 09:09:51.674430  296022 kubeadm.go:401] StartCluster: {Name:newest-cni-718157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-718157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:09:51.674707  296022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:09:51.674780  296022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:09:51.705381  296022 cri.go:89] found id: "edfd807dab22fa6213edfb81545f201f0a403a8832df765a9ff61a63fff61f00"
	I1206 09:09:51.705405  296022 cri.go:89] found id: "34fd2947eabbba8509f54f214b029f7c2bd39db41b9053eaa3e3ddd4162a81a1"
	I1206 09:09:51.705411  296022 cri.go:89] found id: "253464fe728f5df3928faa0de25e3ac14233c44ad60ec693fc2e9cd6e668046f"
	I1206 09:09:51.705415  296022 cri.go:89] found id: "5b265fccde521d8e110f888965af6a403842f8d38dd2f53f9725ac68df3a988f"
	I1206 09:09:51.705420  296022 cri.go:89] found id: ""
	I1206 09:09:51.705475  296022 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:09:51.717673  296022 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:09:51Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:09:51.717756  296022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:09:51.725831  296022 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:09:51.725849  296022 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:09:51.725899  296022 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:09:51.733937  296022 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:09:51.735237  296022 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-718157" does not appear in /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:51.736140  296022 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-5617/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-718157" cluster setting kubeconfig missing "newest-cni-718157" context setting]
	I1206 09:09:51.737494  296022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:51.739728  296022 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:09:51.748879  296022 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1206 09:09:51.748914  296022 kubeadm.go:602] duration metric: took 23.059392ms to restartPrimaryControlPlane
	I1206 09:09:51.748924  296022 kubeadm.go:403] duration metric: took 74.50445ms to StartCluster
	I1206 09:09:51.748942  296022 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:51.749022  296022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:09:51.751464  296022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:09:51.751731  296022 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:09:51.751823  296022 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:09:51.751919  296022 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-718157"
	I1206 09:09:51.751938  296022 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-718157"
	I1206 09:09:51.751938  296022 config.go:182] Loaded profile config "newest-cni-718157": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	W1206 09:09:51.751946  296022 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:09:51.751961  296022 addons.go:70] Setting dashboard=true in profile "newest-cni-718157"
	I1206 09:09:51.751976  296022 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:51.752002  296022 addons.go:239] Setting addon dashboard=true in "newest-cni-718157"
	W1206 09:09:51.752012  296022 addons.go:248] addon dashboard should already be in state true
	I1206 09:09:51.752020  296022 addons.go:70] Setting default-storageclass=true in profile "newest-cni-718157"
	I1206 09:09:51.752047  296022 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:51.752048  296022 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-718157"
	I1206 09:09:51.752360  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:51.752505  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:51.752586  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:51.753868  296022 out.go:179] * Verifying Kubernetes components...
	I1206 09:09:51.755853  296022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:09:51.778554  296022 addons.go:239] Setting addon default-storageclass=true in "newest-cni-718157"
	W1206 09:09:51.778580  296022 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:09:51.778607  296022 host.go:66] Checking if "newest-cni-718157" exists ...
	I1206 09:09:51.779086  296022 cli_runner.go:164] Run: docker container inspect newest-cni-718157 --format={{.State.Status}}
	I1206 09:09:51.779424  296022 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:09:51.779439  296022 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 09:09:51.780756  296022 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:51.780774  296022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:09:51.780826  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:51.780871  296022 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1206 09:09:48.115076  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:09:50.614950  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	I1206 09:09:51.782282  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 09:09:51.782301  296022 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 09:09:51.782355  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:51.807583  296022 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:51.807609  296022 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:09:51.807669  296022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718157
	I1206 09:09:51.821708  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:51.822830  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:51.840355  296022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/newest-cni-718157/id_rsa Username:docker}
	I1206 09:09:51.896324  296022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:09:51.908922  296022 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:09:51.909003  296022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:09:51.921037  296022 api_server.go:72] duration metric: took 169.274238ms to wait for apiserver process to appear ...
	I1206 09:09:51.921063  296022 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:09:51.921082  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:51.928431  296022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:09:51.929502  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:09:51.929521  296022 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:09:51.944738  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:09:51.944763  296022 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:09:51.945026  296022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:09:51.959261  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:09:51.959283  296022 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:09:51.976090  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:09:51.976113  296022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:09:51.993283  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:09:51.993308  296022 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:09:52.007375  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:09:52.007402  296022 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:09:52.020021  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:09:52.020043  296022 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:09:52.032376  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:09:52.032394  296022 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:09:52.045201  296022 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:09:52.045225  296022 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:09:52.058525  296022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:09:53.311509  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 09:09:53.311539  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 09:09:53.311554  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:53.362357  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 09:09:53.362399  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 09:09:53.421619  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:53.426200  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:09:53.426232  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:09:53.867390  296022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.938928215s)
	I1206 09:09:53.867470  296022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.922413934s)
	I1206 09:09:53.867604  296022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.80904183s)
	I1206 09:09:53.869217  296022 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-718157 addons enable metrics-server
	
	I1206 09:09:53.878801  296022 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1206 09:09:53.880378  296022 addons.go:530] duration metric: took 2.128562509s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1206 09:09:53.921778  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:53.926604  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:09:53.926626  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:09:54.422145  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:54.427281  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:09:54.427311  296022 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:09:54.922060  296022 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:09:54.926390  296022 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1206 09:09:54.927406  296022 api_server.go:141] control plane version: v1.35.0-beta.0
	I1206 09:09:54.927428  296022 api_server.go:131] duration metric: took 3.0063594s to wait for apiserver health ...
	I1206 09:09:54.927436  296022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:09:54.931163  296022 system_pods.go:59] 8 kube-system pods found
	I1206 09:09:54.931191  296022 system_pods.go:61] "coredns-7d764666f9-4xnvs" [56b811f4-2c33-47ae-a18e-91bf00c91dda] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:09:54.931203  296022 system_pods.go:61] "etcd-newest-cni-718157" [3d942387-01d6-4fd9-a474-258befcbde87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:09:54.931213  296022 system_pods.go:61] "kindnet-6q6w2" [740bbe6b-e50c-4cf4-b593-5f871820515c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:09:54.931224  296022 system_pods.go:61] "kube-apiserver-newest-cni-718157" [488d8c89-4121-4c74-9433-c14123aa9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:09:54.931233  296022 system_pods.go:61] "kube-controller-manager-newest-cni-718157" [f5fd9d31-9322-4da5-8e82-8e20ae26ca00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:09:54.931240  296022 system_pods.go:61] "kube-proxy-46zxv" [13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:09:54.931252  296022 system_pods.go:61] "kube-scheduler-newest-cni-718157" [a256efa5-856a-4103-b3b9-397143dc1894] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:09:54.931259  296022 system_pods.go:61] "storage-provisioner" [72d40874-81fd-421f-95e4-7f8b2380f340] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1206 09:09:54.931275  296022 system_pods.go:74] duration metric: took 3.832886ms to wait for pod list to return data ...
	I1206 09:09:54.931284  296022 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:09:54.933640  296022 default_sa.go:45] found service account: "default"
	I1206 09:09:54.933660  296022 default_sa.go:55] duration metric: took 2.368927ms for default service account to be created ...
	I1206 09:09:54.933672  296022 kubeadm.go:587] duration metric: took 3.181913749s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:09:54.933685  296022 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:09:54.935802  296022 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:09:54.935821  296022 node_conditions.go:123] node cpu capacity is 8
	I1206 09:09:54.935836  296022 node_conditions.go:105] duration metric: took 2.143409ms to run NodePressure ...
	I1206 09:09:54.935847  296022 start.go:242] waiting for startup goroutines ...
	I1206 09:09:54.935853  296022 start.go:247] waiting for cluster config update ...
	I1206 09:09:54.935865  296022 start.go:256] writing updated cluster config ...
	I1206 09:09:54.936132  296022 ssh_runner.go:195] Run: rm -f paused
	I1206 09:09:54.985331  296022 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1206 09:09:54.987519  296022 out.go:179] * Done! kubectl is now configured to use "newest-cni-718157" cluster and "default" namespace by default
	W1206 09:09:51.983502  289573 node_ready.go:57] node "auto-646473" has "Ready":"False" status (will retry)
	W1206 09:09:54.482670  289573 node_ready.go:57] node "auto-646473" has "Ready":"False" status (will retry)
	W1206 09:09:53.114206  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:09:55.114560  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:09:57.114607  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.35942046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.361838935Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a3c7df49-110c-4a9a-85d8-1068ac835dd2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.364107443Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.364667648Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f5b32157-d19d-414c-af7c-522ff98738e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.36491282Z" level=info msg="Ran pod sandbox 2f5ba92dd965f359d6e69887b9bce0bfaf741d96d2e4c5f31f1c7271b75204d5 with infra container: kube-system/kube-proxy-46zxv/POD" id=a3c7df49-110c-4a9a-85d8-1068ac835dd2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.366197166Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=77ef5b97-af3f-4dbd-8aad-233307afc036 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.366324202Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.367229077Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5edf4081-feb8-45a5-b72a-bb4b8f19719c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.367239305Z" level=info msg="Ran pod sandbox 5a5f819d937dc300f1768a956dc0dff3b684c19ae9bd759134b934429ba3f1ea with infra container: kube-system/kindnet-6q6w2/POD" id=f5b32157-d19d-414c-af7c-522ff98738e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.368154943Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ff2bffaa-a25f-4aa1-bc3c-2071f2d20080 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.368313037Z" level=info msg="Creating container: kube-system/kube-proxy-46zxv/kube-proxy" id=c028d177-b818-4538-a694-1369abcbc575 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.368425396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.369017838Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6124d1d5-0f28-486b-be08-7405cdf1276a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.372210513Z" level=info msg="Creating container: kube-system/kindnet-6q6w2/kindnet-cni" id=01c1395e-54e2-4a37-ba67-9211bb1f7d3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.37247885Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.373863303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.374633658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.376973646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.377668704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.401684113Z" level=info msg="Created container c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb: kube-system/kindnet-6q6w2/kindnet-cni" id=01c1395e-54e2-4a37-ba67-9211bb1f7d3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.402406098Z" level=info msg="Starting container: c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb" id=69f0b9f6-161e-4da7-a402-d31c35591bd9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.404981027Z" level=info msg="Started container" PID=1050 containerID=c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb description=kube-system/kindnet-6q6w2/kindnet-cni id=69f0b9f6-161e-4da7-a402-d31c35591bd9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a5f819d937dc300f1768a956dc0dff3b684c19ae9bd759134b934429ba3f1ea
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.405448924Z" level=info msg="Created container fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd: kube-system/kube-proxy-46zxv/kube-proxy" id=c028d177-b818-4538-a694-1369abcbc575 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.405934201Z" level=info msg="Starting container: fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd" id=efbae18f-9f3a-4067-a7ca-b39fb8bd6a36 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:09:54 newest-cni-718157 crio[522]: time="2025-12-06T09:09:54.409296764Z" level=info msg="Started container" PID=1049 containerID=fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd description=kube-system/kube-proxy-46zxv/kube-proxy id=efbae18f-9f3a-4067-a7ca-b39fb8bd6a36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f5ba92dd965f359d6e69887b9bce0bfaf741d96d2e4c5f31f1c7271b75204d5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c27d3065b3fdb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   5a5f819d937dc       kindnet-6q6w2                               kube-system
	fa498972878bf       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   5 seconds ago       Running             kube-proxy                1                   2f5ba92dd965f       kube-proxy-46zxv                            kube-system
	edfd807dab22f       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   4faee54531bb0       kube-scheduler-newest-cni-718157            kube-system
	34fd2947eabbb       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   071cdc19481eb       kube-controller-manager-newest-cni-718157   kube-system
	253464fe728f5       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   42f9de2cfda1d       kube-apiserver-newest-cni-718157            kube-system
	5b265fccde521       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   ea9c259cc0a59       etcd-newest-cni-718157                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-718157
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-718157
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=newest-cni-718157
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_09_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:09:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-718157
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:09:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:09:53 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:09:53 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:09:53 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 06 Dec 2025 09:09:53 +0000   Sat, 06 Dec 2025 09:09:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-718157
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                ec2caef7-c7e1-47e9-abcb-e0e0655dbe92
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-718157                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-6q6w2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-718157             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-718157    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-46zxv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-718157             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  23s   node-controller  Node newest-cni-718157 event: Registered Node newest-cni-718157 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-718157 event: Registered Node newest-cni-718157 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [5b265fccde521d8e110f888965af6a403842f8d38dd2f53f9725ac68df3a988f] <==
	{"level":"warn","ts":"2025-12-06T09:09:52.694692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.701353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.708380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.730661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.741495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.747841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.754281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.760457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.767563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.773697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.787116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.807814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.814173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.820663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.826862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.833213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.839466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.845620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.851624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.857817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.864519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.877090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.883718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.889928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:52.896038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:10:00 up 52 min,  0 user,  load average: 5.02, 3.00, 1.99
	Linux newest-cni-718157 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c27d3065b3fdb5dc4bcb3b69c2d99e0ffdc1d479cdd5fd39ef00255d9edf8adb] <==
	I1206 09:09:54.628565       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:09:54.628847       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1206 09:09:54.628984       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:09:54.629020       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:09:54.629051       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:09:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:09:54.833053       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:09:54.849795       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:09:54.849860       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:09:54.928824       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:09:55.249975       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:09:55.250020       1 metrics.go:72] Registering metrics
	I1206 09:09:55.250083       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [253464fe728f5df3928faa0de25e3ac14233c44ad60ec693fc2e9cd6e668046f] <==
	I1206 09:09:53.406235       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:53.406149       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:09:53.406255       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:09:53.406279       1 aggregator.go:187] initial CRD sync complete...
	I1206 09:09:53.406287       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:09:53.406291       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:09:53.406297       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:09:53.406156       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:09:53.410221       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:53.410241       1 policy_source.go:248] refreshing policies
	E1206 09:09:53.412963       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:09:53.413855       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:09:53.448911       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:09:53.677310       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:09:53.703310       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:09:53.721581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:09:53.728538       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:09:53.737648       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:09:53.769807       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.172.17"}
	I1206 09:09:53.781169       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.249.223"}
	I1206 09:09:54.309137       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:09:56.991140       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:09:57.042265       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:09:57.140578       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:09:57.241588       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [34fd2947eabbba8509f54f214b029f7c2bd39db41b9053eaa3e3ddd4162a81a1] <==
	I1206 09:09:56.544234       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.544292       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.544863       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546244       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546295       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546307       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546311       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546262       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546276       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546282       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546273       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546288       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546377       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:09:56.546288       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546307       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546275       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546446       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-718157"
	I1206 09:09:56.546262       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.546542       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1206 09:09:56.549814       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.553846       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:09:56.646259       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:56.646283       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:09:56.646290       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:09:56.653955       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [fa498972878bf0899014e84cc4f630edfbef094d546cfef03d19d765a5b2acdd] <==
	I1206 09:09:54.448460       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:09:54.522800       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:09:54.623266       1 shared_informer.go:377] "Caches are synced"
	I1206 09:09:54.623296       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1206 09:09:54.623375       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:09:54.641822       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:09:54.641885       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:09:54.648005       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:09:54.648352       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:09:54.648373       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:09:54.650345       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:09:54.650369       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:09:54.650414       1 config.go:200] "Starting service config controller"
	I1206 09:09:54.650421       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:09:54.651227       1 config.go:309] "Starting node config controller"
	I1206 09:09:54.651251       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:09:54.651262       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:09:54.651280       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:09:54.651269       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:09:54.750576       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:09:54.750598       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:09:54.751919       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [edfd807dab22fa6213edfb81545f201f0a403a8832df765a9ff61a63fff61f00] <==
	I1206 09:09:51.824916       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:09:53.337840       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:09:53.337905       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:09:53.337918       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:09:53.337928       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:09:53.365530       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:09:53.365564       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:09:53.367714       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:09:53.367747       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:09:53.367847       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:09:53.368254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:09:53.468605       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: I1206 09:09:53.470480     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.476420     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-718157\" already exists" pod="kube-system/kube-apiserver-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: I1206 09:09:53.476456     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.482042     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-718157\" already exists" pod="kube-system/kube-controller-manager-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: I1206 09:09:53.482076     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.487660     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-718157\" already exists" pod="kube-system/kube-scheduler-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: I1206 09:09:53.713548     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.719529     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-718157\" already exists" pod="kube-system/kube-controller-manager-newest-cni-718157"
	Dec 06 09:09:53 newest-cni-718157 kubelet[670]: E1206 09:09:53.719654     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-718157" containerName="kube-controller-manager"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.049322     670 apiserver.go:52] "Watching apiserver"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.056396     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: E1206 09:09:54.094323     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-718157" containerName="kube-apiserver"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: E1206 09:09:54.094518     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-718157" containerName="kube-controller-manager"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: E1206 09:09:54.094671     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-718157" containerName="kube-scheduler"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: E1206 09:09:54.094795     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-718157" containerName="etcd"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.146692     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/740bbe6b-e50c-4cf4-b593-5f871820515c-xtables-lock\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.146733     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/740bbe6b-e50c-4cf4-b593-5f871820515c-cni-cfg\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.146894     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690-xtables-lock\") pod \"kube-proxy-46zxv\" (UID: \"13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690\") " pod="kube-system/kube-proxy-46zxv"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.146940     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690-lib-modules\") pod \"kube-proxy-46zxv\" (UID: \"13b5f2d1-b3d2-43a4-ab51-1ea0c1b7f690\") " pod="kube-system/kube-proxy-46zxv"
	Dec 06 09:09:54 newest-cni-718157 kubelet[670]: I1206 09:09:54.147046     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/740bbe6b-e50c-4cf4-b593-5f871820515c-lib-modules\") pod \"kindnet-6q6w2\" (UID: \"740bbe6b-e50c-4cf4-b593-5f871820515c\") " pod="kube-system/kindnet-6q6w2"
	Dec 06 09:09:55 newest-cni-718157 kubelet[670]: E1206 09:09:55.166266     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-718157" containerName="kube-scheduler"
	Dec 06 09:09:55 newest-cni-718157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:09:55 newest-cni-718157 kubelet[670]: I1206 09:09:55.951883     670 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 06 09:09:55 newest-cni-718157 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:09:55 newest-cni-718157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
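The repeated api_server.go entries in the stdout capture above show minikube polling the apiserver's /healthz endpoint: the anonymous request is first rejected with 403 until the RBAC bootstrap roles exist, then answered with 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still failing, and finally with 200 ("ok"). A minimal Go sketch of that kind of poll follows; it is an illustration only, not minikube's implementation, and the address is copied from the log (?verbose asks the apiserver for the per-check [+]/[-] list):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skip certificate verification only because this is a throwaway illustration.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.94.2:8443/healthz?verbose" // address taken from the log above
		for attempt := 0; attempt < 10; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("request failed:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("attempt %d: status %d\n%s\n", attempt, resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // matches the "returned 200: ok" line in the log
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}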
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-718157 -n newest-cni-718157
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-718157 -n newest-cni-718157: exit status 2 (320.279242ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-718157 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-4xnvs storage-provisioner dashboard-metrics-scraper-867fb5f87b-6r77h kubernetes-dashboard-b84665fb8-pm4bb
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-718157 describe pod coredns-7d764666f9-4xnvs storage-provisioner dashboard-metrics-scraper-867fb5f87b-6r77h kubernetes-dashboard-b84665fb8-pm4bb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-718157 describe pod coredns-7d764666f9-4xnvs storage-provisioner dashboard-metrics-scraper-867fb5f87b-6r77h kubernetes-dashboard-b84665fb8-pm4bb: exit status 1 (63.273905ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4xnvs" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6r77h" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-pm4bb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-718157 describe pod coredns-7d764666f9-4xnvs storage-provisioner dashboard-metrics-scraper-867fb5f87b-6r77h kubernetes-dashboard-b84665fb8-pm4bb: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-213278 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-213278 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.746208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:10:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-213278 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-213278 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-213278 describe deploy/metrics-server -n kube-system: exit status 1 (65.03301ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-213278 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
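	The failure above boils down to the addon-enable path first checking whether the cluster is paused by shelling out to runc ("sudo runc list -f json"), which fails on this crio node because /run/runc does not exist, so the metrics-server deployment is never created. A rough manual spot-check of the same two facts might look like the following (illustrative only; assumes SSH access to the profile and that crictl is present in the node image):

	out/minikube-linux-amd64 -p default-k8s-diff-port-213278 ssh -- sudo crictl ps -a    # list containers via the CRI rather than the runc CLI
	kubectl --context default-k8s-diff-port-213278 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'    # would print the fake.domain image only if the addon had actually been applied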
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-213278
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-213278:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf",
	        "Created": "2025-12-06T09:09:12.980409254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285435,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:09:13.019504613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/hosts",
	        "LogPath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf-json.log",
	        "Name": "/default-k8s-diff-port-213278",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-213278:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-213278",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf",
	                "LowerDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-213278",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-213278/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-213278",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-213278",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213278",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "73ee89104aa76b0427eaf53abf89de007a72a89b1c09f71c686f03b21430376e",
	            "SandboxKey": "/var/run/docker/netns/73ee89104aa7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-213278": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57bdd7b529719bb4288cd247e9e4bc85dc55500f3378aa22459233ae5de1bd98",
	                    "EndpointID": "398aaa75ef92b5fc8e583a34338d7e972f3f4c0b152064cb6ba13d2901d84b6d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "56:9c:61:41:51:bf",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-213278",
	                        "7ed3f206e5bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
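	Only a couple of fields from the inspect dump above matter for the post-mortem; the same Go-template style minikube uses elsewhere in these logs can pull them directly (illustrative command, not part of the test run):

	docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-213278    # prints "running 33086" for the container captured above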
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-213278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-213278 logs -n 25: (1.134127959s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-646473 sudo cat /etc/hosts                                                                                                                 │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo cat /etc/resolv.conf                                                                                                           │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo crictl pods                                                                                                                    │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo crictl ps --all                                                                                                                │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo ip a s                                                                                                                         │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo ip r s                                                                                                                         │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo iptables-save                                                                                                                  │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo iptables -t nat -L -n -v                                                                                                       │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo systemctl status kubelet --all --full --no-pager                                                                               │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo systemctl cat kubelet --no-pager                                                                                               │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo cat /etc/kubernetes/kubelet.conf                                                                                               │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo cat /var/lib/kubelet/config.yaml                                                                                               │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo systemctl status docker --all --full --no-pager                                                                                │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ ssh     │ -p auto-646473 sudo systemctl cat docker --no-pager                                                                                                │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo cat /etc/docker/daemon.json                                                                                                    │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ ssh     │ -p auto-646473 sudo docker system info                                                                                                             │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ ssh     │ -p auto-646473 sudo systemctl status cri-docker --all --full --no-pager                                                                            │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ ssh     │ -p auto-646473 sudo systemctl cat cri-docker --no-pager                                                                                            │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                       │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ ssh     │ -p auto-646473 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                 │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-213278 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ ssh     │ -p auto-646473 sudo cri-dockerd --version                                                                                                          │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ ssh     │ -p auto-646473 sudo systemctl status containerd --all --full --no-pager                                                                            │ auto-646473                  │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:06
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:06.847193  302585 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:06.847308  302585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:06.847323  302585 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:06.847330  302585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:06.847611  302585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:10:06.848231  302585 out.go:368] Setting JSON to false
	I1206 09:10:06.849696  302585 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3158,"bootTime":1765009049,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:06.849777  302585 start.go:143] virtualization: kvm guest
	I1206 09:10:06.853041  302585 out.go:179] * [embed-certs-931091] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:10:06.857677  302585 notify.go:221] Checking for updates...
	I1206 09:10:06.857693  302585 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:10:06.860052  302585 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:06.861862  302585 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:06.866654  302585 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:10:06.868189  302585 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:10:06.869598  302585 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:10:06.871667  302585 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:06.872444  302585 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:10:06.903724  302585 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:10:06.903836  302585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:10:06.973741  302585 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:80 SystemTime:2025-12-06 09:10:06.961922627 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:10:06.973883  302585 docker.go:319] overlay module found
	I1206 09:10:06.978266  302585 out.go:179] * Using the docker driver based on existing profile
	I1206 09:10:06.979858  302585 start.go:309] selected driver: docker
	I1206 09:10:06.979875  302585 start.go:927] validating driver "docker" against &{Name:embed-certs-931091 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-931091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:06.980055  302585 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:10:06.980848  302585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:10:07.053068  302585 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:80 SystemTime:2025-12-06 09:10:07.04136004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:10:07.053577  302585 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:10:07.053622  302585 cni.go:84] Creating CNI manager for ""
	I1206 09:10:07.053691  302585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:10:07.053743  302585 start.go:353] cluster config:
	{Name:embed-certs-931091 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-931091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:07.055386  302585 out.go:179] * Starting "embed-certs-931091" primary control-plane node in "embed-certs-931091" cluster
	I1206 09:10:07.056420  302585 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:10:07.057938  302585 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:10:07.058911  302585 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:10:07.058949  302585 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:10:07.058959  302585 cache.go:65] Caching tarball of preloaded images
	I1206 09:10:07.059009  302585 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:10:07.059084  302585 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:10:07.059102  302585 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:10:07.059236  302585 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/config.json ...
	I1206 09:10:07.084161  302585 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:10:07.084184  302585 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:10:07.084201  302585 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:10:07.084236  302585 start.go:360] acquireMachinesLock for embed-certs-931091: {Name:mk2b48616d78e17e3cbabb1af18ccebe23a4da75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:10:07.084302  302585 start.go:364] duration metric: took 43.902µs to acquireMachinesLock for "embed-certs-931091"
	I1206 09:10:07.084323  302585 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:10:07.084329  302585 fix.go:54] fixHost starting: 
	I1206 09:10:07.084591  302585 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:10:07.104498  302585 fix.go:112] recreateIfNeeded on embed-certs-931091: state=Stopped err=<nil>
	W1206 09:10:07.104522  302585 fix.go:138] unexpected machine state, will restart: <nil>
	W1206 09:10:04.114593  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:10:06.616215  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	I1206 09:10:04.055945  301707 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:10:04.056175  301707 start.go:159] libmachine.API.Create for "kindnet-646473" (driver="docker")
	I1206 09:10:04.056203  301707 client.go:173] LocalClient.Create starting
	I1206 09:10:04.056284  301707 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 09:10:04.056311  301707 main.go:143] libmachine: Decoding PEM data...
	I1206 09:10:04.056327  301707 main.go:143] libmachine: Parsing certificate...
	I1206 09:10:04.056381  301707 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 09:10:04.056399  301707 main.go:143] libmachine: Decoding PEM data...
	I1206 09:10:04.056411  301707 main.go:143] libmachine: Parsing certificate...
	I1206 09:10:04.056698  301707 cli_runner.go:164] Run: docker network inspect kindnet-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:10:04.074579  301707 cli_runner.go:211] docker network inspect kindnet-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:10:04.074655  301707 network_create.go:284] running [docker network inspect kindnet-646473] to gather additional debugging logs...
	I1206 09:10:04.074670  301707 cli_runner.go:164] Run: docker network inspect kindnet-646473
	W1206 09:10:04.093909  301707 cli_runner.go:211] docker network inspect kindnet-646473 returned with exit code 1
	I1206 09:10:04.093966  301707 network_create.go:287] error running [docker network inspect kindnet-646473]: docker network inspect kindnet-646473: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-646473 not found
	I1206 09:10:04.094028  301707 network_create.go:289] output of [docker network inspect kindnet-646473]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-646473 not found
	
	** /stderr **
	I1206 09:10:04.094195  301707 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:10:04.112536  301707 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9cbe8712784d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:e7:96:d9:b6:56} reservation:<nil>}
	I1206 09:10:04.113283  301707 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e3326c841ae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:98:ee:f3:0b:a9} reservation:<nil>}
	I1206 09:10:04.114155  301707 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c7af411946b0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:ab:a1:53:1d:7e} reservation:<nil>}
	I1206 09:10:04.114878  301707 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-14acb07f6393 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:59:1c:30:2c:0f} reservation:<nil>}
	I1206 09:10:04.115443  301707 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-57bdd7b52971 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:54:0b:60:1c:a3} reservation:<nil>}
	I1206 09:10:04.116251  301707 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e95d30}
	I1206 09:10:04.116282  301707 network_create.go:124] attempt to create docker network kindnet-646473 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1206 09:10:04.116336  301707 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-646473 kindnet-646473
	I1206 09:10:04.168494  301707 network_create.go:108] docker network kindnet-646473 192.168.94.0/24 created
	I1206 09:10:04.168534  301707 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-646473" container
	I1206 09:10:04.168613  301707 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:10:04.187982  301707 cli_runner.go:164] Run: docker volume create kindnet-646473 --label name.minikube.sigs.k8s.io=kindnet-646473 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:10:04.207902  301707 oci.go:103] Successfully created a docker volume kindnet-646473
	I1206 09:10:04.207983  301707 cli_runner.go:164] Run: docker run --rm --name kindnet-646473-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-646473 --entrypoint /usr/bin/test -v kindnet-646473:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:10:04.649306  301707 oci.go:107] Successfully prepared a docker volume kindnet-646473
	I1206 09:10:04.649366  301707 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:10:04.649381  301707 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:10:04.649575  301707 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-646473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:10:08.647728  301707 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-646473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.998080106s)
	I1206 09:10:08.647766  301707 kic.go:203] duration metric: took 3.998380698s to extract preloaded images to volume ...
	W1206 09:10:08.647869  301707 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:10:08.647910  301707 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:10:08.647955  301707 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:10:08.714821  301707 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-646473 --name kindnet-646473 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-646473 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-646473 --network kindnet-646473 --ip 192.168.94.2 --volume kindnet-646473:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:10:07.106788  302585 out.go:252] * Restarting existing docker container for "embed-certs-931091" ...
	I1206 09:10:07.106868  302585 cli_runner.go:164] Run: docker start embed-certs-931091
	I1206 09:10:07.383577  302585 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:10:07.406401  302585 kic.go:430] container "embed-certs-931091" state is running.
	I1206 09:10:07.406923  302585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-931091
	I1206 09:10:07.430613  302585 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/config.json ...
	I1206 09:10:07.449245  302585 machine.go:94] provisionDockerMachine start ...
	I1206 09:10:07.449390  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:07.469128  302585 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:07.469359  302585 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 09:10:07.469376  302585 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:10:07.470049  302585 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40856->127.0.0.1:33103: read: connection reset by peer
	I1206 09:10:10.600016  302585 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-931091
	
	I1206 09:10:10.600040  302585 ubuntu.go:182] provisioning hostname "embed-certs-931091"
	I1206 09:10:10.600090  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:10.618883  302585 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:10.619196  302585 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 09:10:10.619214  302585 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-931091 && echo "embed-certs-931091" | sudo tee /etc/hostname
	I1206 09:10:10.757248  302585 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-931091
	
	I1206 09:10:10.757344  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:10.776366  302585 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:10.776632  302585 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 09:10:10.776658  302585 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-931091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-931091/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-931091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:10:10.904626  302585 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:10:10.904663  302585 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:10:10.904697  302585 ubuntu.go:190] setting up certificates
	I1206 09:10:10.904705  302585 provision.go:84] configureAuth start
	I1206 09:10:10.904771  302585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-931091
	I1206 09:10:10.922691  302585 provision.go:143] copyHostCerts
	I1206 09:10:10.922750  302585 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:10:10.922758  302585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:10:10.922832  302585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:10:10.922926  302585 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:10:10.922937  302585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:10:10.922965  302585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:10:10.923076  302585 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:10:10.923089  302585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:10:10.923127  302585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:10:10.923177  302585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.embed-certs-931091 san=[127.0.0.1 192.168.103.2 embed-certs-931091 localhost minikube]
	I1206 09:10:11.091544  302585 provision.go:177] copyRemoteCerts
	I1206 09:10:11.091598  302585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:10:11.091633  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:11.111301  302585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:10:11.205537  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:10:11.223105  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1206 09:10:11.240399  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:10:11.257697  302585 provision.go:87] duration metric: took 352.979882ms to configureAuth
	I1206 09:10:11.257739  302585 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:10:11.257891  302585 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:11.257981  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:11.276068  302585 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:11.276342  302585 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 09:10:11.276363  302585 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:10:11.589774  302585 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:10:11.589795  302585 machine.go:97] duration metric: took 4.140522366s to provisionDockerMachine
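	provisionDockerMachine finishes by writing /etc/sysconfig/crio.minikube (the CRIO_MINIKUBE_OPTIONS block echoed above) and restarting CRI-O. A minimal check on the node, assuming the same paths as in the log:
	    sudo cat /etc/sysconfig/crio.minikube    # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    sudo systemctl is-active crio            # expect: active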
	I1206 09:10:11.589806  302585 start.go:293] postStartSetup for "embed-certs-931091" (driver="docker")
	I1206 09:10:11.589815  302585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:10:11.589867  302585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:10:11.589926  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:11.609643  302585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:10:11.703410  302585 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:10:11.707110  302585 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:10:11.707149  302585 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:10:11.707161  302585 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:10:11.707223  302585 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:10:11.707321  302585 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:10:11.707446  302585 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:10:11.714931  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:10:11.732277  302585 start.go:296] duration metric: took 142.457044ms for postStartSetup
	I1206 09:10:11.732356  302585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:10:11.732422  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:11.750429  302585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:10:11.841052  302585 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:10:11.845574  302585 fix.go:56] duration metric: took 4.761242343s for fixHost
	I1206 09:10:11.845596  302585 start.go:83] releasing machines lock for "embed-certs-931091", held for 4.761282131s
	I1206 09:10:11.845647  302585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-931091
	W1206 09:10:09.119252  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:10:11.615039  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	I1206 09:10:11.863499  302585 ssh_runner.go:195] Run: cat /version.json
	I1206 09:10:11.863539  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:11.863546  302585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:10:11.863625  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:11.881942  302585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:10:11.882375  302585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:10:11.972490  302585 ssh_runner.go:195] Run: systemctl --version
	I1206 09:10:12.028976  302585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:10:12.064280  302585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:10:12.069129  302585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:10:12.069200  302585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:10:12.077127  302585 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:10:12.077153  302585 start.go:496] detecting cgroup driver to use...
	I1206 09:10:12.077182  302585 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:10:12.077216  302585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:10:12.091379  302585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:10:12.104278  302585 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:10:12.104334  302585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:10:12.119577  302585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:10:12.132780  302585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:10:12.213032  302585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:10:12.297680  302585 docker.go:234] disabling docker service ...
	I1206 09:10:12.297747  302585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:10:12.311904  302585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:10:12.325244  302585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:10:12.413717  302585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:10:12.496693  302585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:10:12.509158  302585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:10:12.523603  302585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:10:12.523673  302585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:12.533797  302585 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:10:12.533855  302585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:12.543476  302585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:12.552853  302585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:12.561567  302585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:10:12.570089  302585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:12.579265  302585 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:12.587907  302585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:12.597154  302585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:10:12.604775  302585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:10:12.612245  302585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:12.695853  302585 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:10:12.844268  302585 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:10:12.844342  302585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:10:12.848516  302585 start.go:564] Will wait 60s for crictl version
	I1206 09:10:12.848565  302585 ssh_runner.go:195] Run: which crictl
	I1206 09:10:12.852165  302585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:10:12.877722  302585 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:10:12.877807  302585 ssh_runner.go:195] Run: crio --version
	I1206 09:10:12.906984  302585 ssh_runner.go:195] Run: crio --version
	I1206 09:10:12.938607  302585 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
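	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A sketch of how to confirm the values those edits should leave behind (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, based on the commands above:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])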
	I1206 09:10:08.982974  301707 cli_runner.go:164] Run: docker container inspect kindnet-646473 --format={{.State.Running}}
	I1206 09:10:09.001471  301707 cli_runner.go:164] Run: docker container inspect kindnet-646473 --format={{.State.Status}}
	I1206 09:10:09.020035  301707 cli_runner.go:164] Run: docker exec kindnet-646473 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:10:09.074472  301707 oci.go:144] the created container "kindnet-646473" has a running status.
	I1206 09:10:09.074500  301707 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/kindnet-646473/id_rsa...
	I1206 09:10:09.095615  301707 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/kindnet-646473/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:10:09.127981  301707 cli_runner.go:164] Run: docker container inspect kindnet-646473 --format={{.State.Status}}
	I1206 09:10:09.147312  301707 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:10:09.147333  301707 kic_runner.go:114] Args: [docker exec --privileged kindnet-646473 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:10:09.189281  301707 cli_runner.go:164] Run: docker container inspect kindnet-646473 --format={{.State.Status}}
	I1206 09:10:09.210304  301707 machine.go:94] provisionDockerMachine start ...
	I1206 09:10:09.210405  301707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646473
	I1206 09:10:09.230373  301707 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:09.230699  301707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1206 09:10:09.230715  301707 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:10:09.231480  301707 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36880->127.0.0.1:33108: read: connection reset by peer
	I1206 09:10:12.366281  301707 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-646473
	
	I1206 09:10:12.366311  301707 ubuntu.go:182] provisioning hostname "kindnet-646473"
	I1206 09:10:12.366420  301707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646473
	I1206 09:10:12.387302  301707 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:12.387534  301707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1206 09:10:12.387546  301707 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-646473 && echo "kindnet-646473" | sudo tee /etc/hostname
	I1206 09:10:12.526937  301707 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-646473
	
	I1206 09:10:12.527101  301707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646473
	I1206 09:10:12.546582  301707 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:12.546859  301707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1206 09:10:12.546883  301707 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-646473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-646473/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-646473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:10:12.675662  301707 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:10:12.675687  301707 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:10:12.675703  301707 ubuntu.go:190] setting up certificates
	I1206 09:10:12.675713  301707 provision.go:84] configureAuth start
	I1206 09:10:12.675768  301707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646473
	I1206 09:10:12.696417  301707 provision.go:143] copyHostCerts
	I1206 09:10:12.696491  301707 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:10:12.696505  301707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:10:12.696589  301707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:10:12.696706  301707 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:10:12.696715  301707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:10:12.696747  301707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:10:12.696841  301707 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:10:12.696853  301707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:10:12.696891  301707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:10:12.697010  301707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.kindnet-646473 san=[127.0.0.1 192.168.94.2 kindnet-646473 localhost minikube]
	I1206 09:10:12.767812  301707 provision.go:177] copyRemoteCerts
	I1206 09:10:12.767877  301707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:10:12.767930  301707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646473
	I1206 09:10:12.788780  301707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kindnet-646473/id_rsa Username:docker}
	I1206 09:10:12.883898  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:10:12.904415  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1206 09:10:12.923142  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:10:12.941882  301707 provision.go:87] duration metric: took 266.157525ms to configureAuth
	I1206 09:10:12.941906  301707 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:10:12.942104  301707 config.go:182] Loaded profile config "kindnet-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:12.942231  301707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646473
	I1206 09:10:12.961288  301707 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:12.961583  301707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1206 09:10:12.961607  301707 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:10:13.250822  301707 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:10:13.250869  301707 machine.go:97] duration metric: took 4.040541775s to provisionDockerMachine
	I1206 09:10:13.250884  301707 client.go:176] duration metric: took 9.194674861s to LocalClient.Create
	I1206 09:10:13.250909  301707 start.go:167] duration metric: took 9.19473309s to libmachine.API.Create "kindnet-646473"
	I1206 09:10:13.250923  301707 start.go:293] postStartSetup for "kindnet-646473" (driver="docker")
	I1206 09:10:13.250936  301707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:10:13.251042  301707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:10:13.251091  301707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646473
	I1206 09:10:13.271145  301707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kindnet-646473/id_rsa Username:docker}
	I1206 09:10:13.372387  301707 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:10:13.376340  301707 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:10:13.376372  301707 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:10:13.376382  301707 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:10:13.376423  301707 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:10:13.376501  301707 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:10:13.376591  301707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:10:13.384425  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:10:13.405094  301707 start.go:296] duration metric: took 154.155756ms for postStartSetup
	I1206 09:10:13.405510  301707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646473
	I1206 09:10:13.425385  301707 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/config.json ...
	I1206 09:10:13.425664  301707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:10:13.425711  301707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646473
	I1206 09:10:13.445144  301707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kindnet-646473/id_rsa Username:docker}
	I1206 09:10:13.537909  301707 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:10:13.542893  301707 start.go:128] duration metric: took 9.489022527s to createHost
	I1206 09:10:13.542924  301707 start.go:83] releasing machines lock for "kindnet-646473", held for 9.489174611s
	I1206 09:10:13.543017  301707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646473
	I1206 09:10:13.561099  301707 ssh_runner.go:195] Run: cat /version.json
	I1206 09:10:13.561154  301707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646473
	I1206 09:10:13.561188  301707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:10:13.561261  301707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646473
	I1206 09:10:13.580826  301707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kindnet-646473/id_rsa Username:docker}
	I1206 09:10:13.581319  301707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/kindnet-646473/id_rsa Username:docker}
	I1206 09:10:13.728715  301707 ssh_runner.go:195] Run: systemctl --version
	I1206 09:10:13.735531  301707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:10:13.772868  301707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:10:13.779142  301707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:10:13.779215  301707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:10:13.811189  301707 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:10:13.811221  301707 start.go:496] detecting cgroup driver to use...
	I1206 09:10:13.811255  301707 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:10:13.811305  301707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:10:13.832940  301707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:10:12.939889  302585 cli_runner.go:164] Run: docker network inspect embed-certs-931091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:10:12.959296  302585 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1206 09:10:12.963744  302585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:10:12.974304  302585 kubeadm.go:884] updating cluster {Name:embed-certs-931091 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-931091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:10:12.974435  302585 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:10:12.974490  302585 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:10:13.007264  302585 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:10:13.007288  302585 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:10:13.007334  302585 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:10:13.037127  302585 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:10:13.037146  302585 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:10:13.037153  302585 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1206 09:10:13.037252  302585 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-931091 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-931091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
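	The kubelet flags above are written to a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). To see the effective unit on the node, a sketch:
	    sudo systemctl cat kubelet
	    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf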
	I1206 09:10:13.037311  302585 ssh_runner.go:195] Run: crio config
	I1206 09:10:13.086282  302585 cni.go:84] Creating CNI manager for ""
	I1206 09:10:13.086310  302585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:10:13.086336  302585 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:10:13.086498  302585 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-931091 NodeName:embed-certs-931091 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:10:13.086957  302585 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-931091"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
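	The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below). Assuming the kubeadm binary under /var/lib/minikube/binaries/v1.34.2 is present and supports the config validate subcommand, it could be sanity-checked on the node with:
	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new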
	I1206 09:10:13.087040  302585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:10:13.095511  302585 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:10:13.095575  302585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:10:13.103291  302585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1206 09:10:13.116540  302585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:10:13.129249  302585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1206 09:10:13.143024  302585 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:10:13.146696  302585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:10:13.156701  302585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:13.239677  302585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:10:13.264848  302585 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091 for IP: 192.168.103.2
	I1206 09:10:13.264870  302585 certs.go:195] generating shared ca certs ...
	I1206 09:10:13.264886  302585 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:13.265098  302585 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:10:13.265168  302585 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:10:13.265182  302585 certs.go:257] generating profile certs ...
	I1206 09:10:13.265295  302585 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/client.key
	I1206 09:10:13.265377  302585 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.key.387b23fa
	I1206 09:10:13.265433  302585 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.key
	I1206 09:10:13.265572  302585 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:10:13.265617  302585 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:10:13.265630  302585 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:10:13.265664  302585 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:10:13.265703  302585 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:10:13.265738  302585 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:10:13.265796  302585 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:10:13.266510  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:10:13.286648  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:10:13.306852  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:10:13.328228  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:10:13.353265  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1206 09:10:13.372408  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:10:13.390087  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:10:13.407911  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/embed-certs-931091/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:10:13.428140  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:10:13.447270  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:10:13.466866  302585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:10:13.484534  302585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:10:13.496831  302585 ssh_runner.go:195] Run: openssl version
	I1206 09:10:13.502691  302585 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:13.510034  302585 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:10:13.517498  302585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:13.521187  302585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:13.521232  302585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:13.559277  302585 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:10:13.568787  302585 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:10:13.577690  302585 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:10:13.586168  302585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:10:13.590242  302585 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:10:13.590295  302585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:10:13.627615  302585 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:10:13.635591  302585 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:10:13.643501  302585 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:10:13.651209  302585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:10:13.654953  302585 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:10:13.655030  302585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:10:13.690337  302585 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
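	The /etc/ssl/certs symlink names tested above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding PEM files, which is what the openssl x509 -hash calls compute. For example:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching /etc/ssl/certs/b5213941.0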
	I1206 09:10:13.698161  302585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:10:13.701966  302585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:10:13.738400  302585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:10:13.781243  302585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:10:13.831179  302585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:10:13.884505  302585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:10:13.951441  302585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 09:10:13.992156  302585 kubeadm.go:401] StartCluster: {Name:embed-certs-931091 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-931091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:13.992233  302585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:10:13.992293  302585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:10:14.023049  302585 cri.go:89] found id: "a846117bc72b733c4769ce32f31c23b76ae89d79e9fd9cf10be97e49bc2b4a74"
	I1206 09:10:14.023072  302585 cri.go:89] found id: "04174e56b26bf5e8534176ff57e230be3ed770891a615c3b75077b0468d06685"
	I1206 09:10:14.023121  302585 cri.go:89] found id: "9a3dc4e5add4a40d23fc1d867a32c27494d2f0aa5fe72049c03da86c84d3090b"
	I1206 09:10:14.023128  302585 cri.go:89] found id: "893b7522c648e625ee7cedf9142d4b1472b197d9b456f9a6939ff5eafca0b904"
	I1206 09:10:14.023134  302585 cri.go:89] found id: ""
	I1206 09:10:14.023193  302585 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:10:14.038454  302585 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:10:14Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:10:14.038518  302585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:10:14.047795  302585 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:10:14.047815  302585 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:10:14.047860  302585 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:10:14.057394  302585 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:10:14.058318  302585 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-931091" does not appear in /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:14.058762  302585 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-5617/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-931091" cluster setting kubeconfig missing "embed-certs-931091" context setting]
	I1206 09:10:14.059406  302585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:14.061257  302585 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:10:14.070186  302585 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1206 09:10:14.070216  302585 kubeadm.go:602] duration metric: took 22.396096ms to restartPrimaryControlPlane
	I1206 09:10:14.070226  302585 kubeadm.go:403] duration metric: took 78.082071ms to StartCluster
	I1206 09:10:14.070243  302585 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:14.070310  302585 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:14.072202  302585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:14.072845  302585 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:10:14.073087  302585 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:14.073143  302585 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:10:14.073217  302585 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-931091"
	I1206 09:10:14.073233  302585 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-931091"
	W1206 09:10:14.073241  302585 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:10:14.073267  302585 host.go:66] Checking if "embed-certs-931091" exists ...
	I1206 09:10:14.073743  302585 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:10:14.074223  302585 addons.go:70] Setting dashboard=true in profile "embed-certs-931091"
	I1206 09:10:14.074252  302585 addons.go:239] Setting addon dashboard=true in "embed-certs-931091"
	W1206 09:10:14.074261  302585 addons.go:248] addon dashboard should already be in state true
	I1206 09:10:14.074316  302585 host.go:66] Checking if "embed-certs-931091" exists ...
	I1206 09:10:14.074797  302585 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:10:14.075071  302585 addons.go:70] Setting default-storageclass=true in profile "embed-certs-931091"
	I1206 09:10:14.075091  302585 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-931091"
	I1206 09:10:14.075374  302585 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:10:14.081220  302585 out.go:179] * Verifying Kubernetes components...
	I1206 09:10:14.083389  302585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:14.109858  302585 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:10:14.109970  302585 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 09:10:14.110721  302585 addons.go:239] Setting addon default-storageclass=true in "embed-certs-931091"
	W1206 09:10:14.110746  302585 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:10:14.110770  302585 host.go:66] Checking if "embed-certs-931091" exists ...
	I1206 09:10:14.111178  302585 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:14.111199  302585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:10:14.111260  302585 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:10:14.111266  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:14.112369  302585 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 09:10:13.850086  301707 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:10:13.850147  301707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:10:13.869902  301707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:10:13.895644  301707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:10:14.024464  301707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:10:14.190786  301707 docker.go:234] disabling docker service ...
	I1206 09:10:14.190854  301707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:10:14.216255  301707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:10:14.232695  301707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:10:14.376241  301707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:10:14.499911  301707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:10:14.519485  301707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:10:14.539397  301707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:10:14.539458  301707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:14.553944  301707 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:10:14.554149  301707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:14.566523  301707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:14.576444  301707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:14.586516  301707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:10:14.598646  301707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:14.609739  301707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:14.627715  301707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:14.638755  301707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:10:14.649175  301707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:10:14.658912  301707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:14.778057  301707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:10:14.951884  301707 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:10:14.951958  301707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:10:14.957606  301707 start.go:564] Will wait 60s for crictl version
	I1206 09:10:14.957671  301707 ssh_runner.go:195] Run: which crictl
	I1206 09:10:14.962739  301707 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:10:14.997704  301707 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:10:14.997785  301707 ssh_runner.go:195] Run: crio --version
	I1206 09:10:15.034240  301707 ssh_runner.go:195] Run: crio --version
	I1206 09:10:15.069364  301707 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:10:14.117086  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 09:10:14.117104  302585 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 09:10:14.117175  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:14.144127  302585 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:14.144149  302585 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:10:14.144112  302585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:10:14.144208  302585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:10:14.150311  302585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:10:14.180382  302585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:10:14.256229  302585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:10:14.269778  302585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:14.276955  302585 node_ready.go:35] waiting up to 6m0s for node "embed-certs-931091" to be "Ready" ...
	I1206 09:10:14.282568  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:10:14.282597  302585 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:10:14.307483  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:10:14.307514  302585 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:10:14.312077  302585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:14.330206  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:10:14.330238  302585 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:10:14.349331  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:10:14.349353  302585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:10:14.376522  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:10:14.376611  302585 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:10:14.397023  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:10:14.397047  302585 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:10:14.417697  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:10:14.417880  302585 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:10:14.445712  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:10:14.445736  302585 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:10:14.466337  302585 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:10:14.466358  302585 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:10:14.485867  302585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:10:15.472795  302585 node_ready.go:49] node "embed-certs-931091" is "Ready"
	I1206 09:10:15.472829  302585 node_ready.go:38] duration metric: took 1.195843076s for node "embed-certs-931091" to be "Ready" ...
	I1206 09:10:15.472845  302585 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:10:15.472911  302585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:10:16.040292  302585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.770481663s)
	I1206 09:10:16.040361  302585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.72825054s)
	I1206 09:10:16.040455  302585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.554547483s)
	I1206 09:10:16.040475  302585 api_server.go:72] duration metric: took 1.967595528s to wait for apiserver process to appear ...
	I1206 09:10:16.040485  302585 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:10:16.040499  302585 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:10:16.042552  302585 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-931091 addons enable metrics-server
	
	I1206 09:10:16.047433  302585 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:10:16.047459  302585 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
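The 500s above are expected this early in the boot: kube-apiserver reports each post-start hook as its own health check, so the two failing hooks can be probed individually while everything else is already ok. A sketch using kubectl's raw API access (run with this cluster's admin kubeconfig; the per-check subpaths are standard apiserver endpoints, not minikube-specific):

    kubectl get --raw '/healthz?verbose'
    kubectl get --raw /healthz/poststarthook/rbac/bootstrap-roles
    kubectl get --raw /healthz/poststarthook/scheduling/bootstrap-system-priority-classes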
	I1206 09:10:16.053796  302585 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1206 09:10:15.071145  301707 cli_runner.go:164] Run: docker network inspect kindnet-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:10:15.099158  301707 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:10:15.103838  301707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:10:15.115749  301707 kubeadm.go:884] updating cluster {Name:kindnet-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:10:15.115858  301707 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:10:15.115898  301707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:10:15.151583  301707 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:10:15.151605  301707 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:10:15.151661  301707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:10:15.181193  301707 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:10:15.181215  301707 cache_images.go:86] Images are preloaded, skipping loading
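The "all images are preloaded" decision comes from comparing this crictl listing against the image set bundled for Kubernetes v1.34.2. A rough manual equivalent on the node (jq is an assumption here, it is not necessarily present in the kicbase image):

    sudo crictl images --output json \
      | jq -r '.images[].repoTags[]' \
      | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause'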
	I1206 09:10:15.181223  301707 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1206 09:10:15.181314  301707 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-646473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
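The empty ExecStart= followed by a second ExecStart= is the standard systemd drop-in idiom for replacing, rather than appending to, the ExecStart of the base kubelet.service. A sketch of installing that override at the path the log scp's to a few lines below, assuming the same flag set:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-646473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2

    [Install]
    EOF
    sudo systemctl daemon-reload && sudo systemctl start kubelet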
	I1206 09:10:15.181392  301707 ssh_runner.go:195] Run: crio config
	I1206 09:10:15.235909  301707 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:10:15.235948  301707 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:10:15.235975  301707 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-646473 NodeName:kindnet-646473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:10:15.236158  301707 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-646473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:10:15.236240  301707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:10:15.246268  301707 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:10:15.246337  301707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:10:15.255105  301707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1206 09:10:15.268657  301707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:10:15.284645  301707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
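The kubeadm config just written to /var/tmp/minikube/kubeadm.yaml.new can be sanity-checked offline before init runs. A sketch assuming the bundled v1.34.2 kubeadm binary, whose `config validate` subcommand accepts the same file:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new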
	I1206 09:10:15.297766  301707 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:10:15.301828  301707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:10:15.313099  301707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:15.429792  301707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:10:15.458907  301707 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473 for IP: 192.168.94.2
	I1206 09:10:15.458927  301707 certs.go:195] generating shared ca certs ...
	I1206 09:10:15.459125  301707 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:15.459297  301707 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:10:15.459354  301707 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:10:15.459363  301707 certs.go:257] generating profile certs ...
	I1206 09:10:15.459432  301707 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/client.key
	I1206 09:10:15.459445  301707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/client.crt with IP's: []
	I1206 09:10:15.737293  301707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/client.crt ...
	I1206 09:10:15.737322  301707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/client.crt: {Name:mk6dc20806a57ea5da24b3f048022c82ead8f5d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:15.737519  301707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/client.key ...
	I1206 09:10:15.737535  301707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/client.key: {Name:mk30c3812bffb007e8e0f32b8257cf49c2b7083d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:15.737649  301707 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.key.666e0d95
	I1206 09:10:15.737673  301707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.crt.666e0d95 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:10:15.837862  301707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.crt.666e0d95 ...
	I1206 09:10:15.837889  301707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.crt.666e0d95: {Name:mk86563114db246288f74bc111450fd9b69e4bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:15.838096  301707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.key.666e0d95 ...
	I1206 09:10:15.838115  301707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.key.666e0d95: {Name:mkdefab721f29bb5e83edcd0982e32ff6d8ede33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:15.838228  301707 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.crt.666e0d95 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.crt
	I1206 09:10:15.838334  301707 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.key.666e0d95 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.key
	I1206 09:10:15.838414  301707 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/proxy-client.key
	I1206 09:10:15.838435  301707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/proxy-client.crt with IP's: []
	I1206 09:10:15.939597  301707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/proxy-client.crt ...
	I1206 09:10:15.939633  301707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/proxy-client.crt: {Name:mkb38354781d28d5e4f44380e4435a67d35f92ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:15.939959  301707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/proxy-client.key ...
	I1206 09:10:15.940018  301707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/proxy-client.key: {Name:mk9028f2b7d7638e8d3d1215056d3b2134029f65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:15.940340  301707 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:10:15.940394  301707 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:10:15.940407  301707 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:10:15.940439  301707 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:10:15.940469  301707 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:10:15.940504  301707 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:10:15.940580  301707 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:10:15.941410  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:10:15.962778  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:10:15.985059  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:10:16.003762  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:10:16.023148  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:10:16.041585  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:10:16.061305  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:10:16.079245  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kindnet-646473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:10:16.097055  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:10:16.116833  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:10:16.134452  301707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:10:16.152000  301707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:10:16.164891  301707 ssh_runner.go:195] Run: openssl version
	I1206 09:10:16.171236  301707 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:10:16.179256  301707 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:10:16.187040  301707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:10:16.190829  301707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:10:16.190888  301707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:10:16.236772  301707 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:10:16.244929  301707 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:10:16.252662  301707 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:16.260053  301707 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:10:16.267772  301707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:16.271663  301707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:16.271715  301707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:16.308069  301707 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:10:16.316871  301707 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:10:16.324303  301707 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:10:16.331677  301707 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:10:16.340696  301707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:10:16.345702  301707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:10:16.345756  301707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:10:16.389618  301707 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:10:16.398886  301707 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
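The openssl/ln pairs above build the hashed symlinks OpenSSL's CApath lookup expects: each CA in /etc/ssl/certs gets a companion link named <subject-hash>.0. The generic recipe, which reproduces the b5213941.0 link created for minikubeCA.pem:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")      # b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"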
	I1206 09:10:16.407639  301707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:10:16.412793  301707 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
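That exit status 1 is the expected outcome on a fresh node rather than an error; the check boils down to a plain existence test, roughly:

    if ! sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo "client cert missing, assuming first start; kubeadm init will generate it"
    fi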
	I1206 09:10:16.412865  301707 kubeadm.go:401] StartCluster: {Name:kindnet-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:16.412949  301707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:10:16.413026  301707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:10:16.444720  301707 cri.go:89] found id: ""
	I1206 09:10:16.444794  301707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:10:16.454762  301707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:10:16.464484  301707 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:10:16.464555  301707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:10:16.473666  301707 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:10:16.473685  301707 kubeadm.go:158] found existing configuration files:
	
	I1206 09:10:16.473730  301707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:10:16.483525  301707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:10:16.483590  301707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:10:16.492428  301707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:10:16.501979  301707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:10:16.502054  301707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:10:16.512026  301707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:10:16.522015  301707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:10:16.522073  301707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:10:16.531244  301707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:10:16.540329  301707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:10:16.540385  301707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
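The four grep/rm pairs above apply the same stale-config check to each kubeconfig; as a loop, the equivalent shell is simply (a sketch of the logic, not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done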
	I1206 09:10:16.549568  301707 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:10:16.596148  301707 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:10:16.596218  301707 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:10:16.620326  301707 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:10:16.620457  301707 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:10:16.620513  301707 kubeadm.go:319] OS: Linux
	I1206 09:10:16.620575  301707 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:10:16.620654  301707 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:10:16.620721  301707 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:10:16.620804  301707 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:10:16.620886  301707 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:10:16.620950  301707 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:10:16.621048  301707 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:10:16.621108  301707 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:10:16.698262  301707 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:10:16.698443  301707 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:10:16.698629  301707 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
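As the preflight output notes, the image pull can be done ahead of time; with the binaries and config used here that would be:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml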
	I1206 09:10:16.711040  301707 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:10:16.055705  302585 addons.go:530] duration metric: took 1.982563121s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1206 09:10:16.541188  302585 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:10:16.545621  302585 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:10:16.545648  302585 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:10:14.116137  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	W1206 09:10:16.614559  282948 node_ready.go:57] node "default-k8s-diff-port-213278" has "Ready":"False" status (will retry)
	I1206 09:10:16.715639  301707 out.go:252]   - Generating certificates and keys ...
	I1206 09:10:16.715753  301707 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:10:16.715876  301707 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:10:16.900029  301707 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:10:17.062279  301707 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:10:17.327080  301707 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:10:17.710931  301707 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:10:17.840642  301707 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:10:17.840814  301707 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-646473 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:10:18.499433  301707 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:10:18.499657  301707 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-646473 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:10:18.784566  301707 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:10:18.115452  282948 node_ready.go:49] node "default-k8s-diff-port-213278" is "Ready"
	I1206 09:10:18.115486  282948 node_ready.go:38] duration metric: took 41.50430832s for node "default-k8s-diff-port-213278" to be "Ready" ...
	I1206 09:10:18.115503  282948 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:10:18.115569  282948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:10:18.132562  282948 api_server.go:72] duration metric: took 41.860315073s to wait for apiserver process to appear ...
	I1206 09:10:18.132637  282948 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:10:18.132663  282948 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1206 09:10:18.138727  282948 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1206 09:10:18.139925  282948 api_server.go:141] control plane version: v1.34.2
	I1206 09:10:18.139951  282948 api_server.go:131] duration metric: took 7.30592ms to wait for apiserver health ...
	I1206 09:10:18.139961  282948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:10:18.144875  282948 system_pods.go:59] 8 kube-system pods found
	I1206 09:10:18.144913  282948 system_pods.go:61] "coredns-66bc5c9577-54hvq" [f156a081-19f1-4a04-8234-24500867cf67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:10:18.144921  282948 system_pods.go:61] "etcd-default-k8s-diff-port-213278" [ba81dffa-f8a1-43a6-bda3-de5197a2764e] Running
	I1206 09:10:18.144931  282948 system_pods.go:61] "kindnet-4jw2t" [1e817daf-c694-4ddf-8e08-85f504421f9b] Running
	I1206 09:10:18.144937  282948 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-213278" [00ae632e-2d8d-48fc-a219-a8411d843ff5] Running
	I1206 09:10:18.144944  282948 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-213278" [990ac86b-e97e-4874-94f3-88bc015c02bf] Running
	I1206 09:10:18.144950  282948 system_pods.go:61] "kube-proxy-86f62" [6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc] Running
	I1206 09:10:18.144956  282948 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-213278" [c5fc3b4d-b7fb-42ba-b275-28d2c56e9b40] Running
	I1206 09:10:18.144963  282948 system_pods.go:61] "storage-provisioner" [4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:10:18.144975  282948 system_pods.go:74] duration metric: took 5.007972ms to wait for pod list to return data ...
	I1206 09:10:18.145013  282948 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:10:18.147595  282948 default_sa.go:45] found service account: "default"
	I1206 09:10:18.147618  282948 default_sa.go:55] duration metric: took 2.594948ms for default service account to be created ...
	I1206 09:10:18.147628  282948 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:10:18.191731  282948 system_pods.go:86] 8 kube-system pods found
	I1206 09:10:18.191773  282948 system_pods.go:89] "coredns-66bc5c9577-54hvq" [f156a081-19f1-4a04-8234-24500867cf67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:10:18.191788  282948 system_pods.go:89] "etcd-default-k8s-diff-port-213278" [ba81dffa-f8a1-43a6-bda3-de5197a2764e] Running
	I1206 09:10:18.191797  282948 system_pods.go:89] "kindnet-4jw2t" [1e817daf-c694-4ddf-8e08-85f504421f9b] Running
	I1206 09:10:18.191823  282948 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-213278" [00ae632e-2d8d-48fc-a219-a8411d843ff5] Running
	I1206 09:10:18.191829  282948 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-213278" [990ac86b-e97e-4874-94f3-88bc015c02bf] Running
	I1206 09:10:18.191836  282948 system_pods.go:89] "kube-proxy-86f62" [6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc] Running
	I1206 09:10:18.191844  282948 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-213278" [c5fc3b4d-b7fb-42ba-b275-28d2c56e9b40] Running
	I1206 09:10:18.191849  282948 system_pods.go:89] "storage-provisioner" [4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b] Running
	I1206 09:10:18.191874  282948 retry.go:31] will retry after 258.66877ms: missing components: kube-dns
	I1206 09:10:18.455149  282948 system_pods.go:86] 8 kube-system pods found
	I1206 09:10:18.455176  282948 system_pods.go:89] "coredns-66bc5c9577-54hvq" [f156a081-19f1-4a04-8234-24500867cf67] Running
	I1206 09:10:18.455182  282948 system_pods.go:89] "etcd-default-k8s-diff-port-213278" [ba81dffa-f8a1-43a6-bda3-de5197a2764e] Running
	I1206 09:10:18.455186  282948 system_pods.go:89] "kindnet-4jw2t" [1e817daf-c694-4ddf-8e08-85f504421f9b] Running
	I1206 09:10:18.455189  282948 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-213278" [00ae632e-2d8d-48fc-a219-a8411d843ff5] Running
	I1206 09:10:18.455193  282948 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-213278" [990ac86b-e97e-4874-94f3-88bc015c02bf] Running
	I1206 09:10:18.455196  282948 system_pods.go:89] "kube-proxy-86f62" [6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc] Running
	I1206 09:10:18.455201  282948 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-213278" [c5fc3b4d-b7fb-42ba-b275-28d2c56e9b40] Running
	I1206 09:10:18.455204  282948 system_pods.go:89] "storage-provisioner" [4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b] Running
	I1206 09:10:18.455210  282948 system_pods.go:126] duration metric: took 307.576784ms to wait for k8s-apps to be running ...
	I1206 09:10:18.455217  282948 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:10:18.455257  282948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:10:18.469882  282948 system_svc.go:56] duration metric: took 14.655075ms WaitForService to wait for kubelet
	I1206 09:10:18.469914  282948 kubeadm.go:587] duration metric: took 42.197680229s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:10:18.469936  282948 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:10:18.473376  282948 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:10:18.473406  282948 node_conditions.go:123] node cpu capacity is 8
	I1206 09:10:18.473425  282948 node_conditions.go:105] duration metric: took 3.482298ms to run NodePressure ...
	I1206 09:10:18.473448  282948 start.go:242] waiting for startup goroutines ...
	I1206 09:10:18.473465  282948 start.go:247] waiting for cluster config update ...
	I1206 09:10:18.473481  282948 start.go:256] writing updated cluster config ...
	I1206 09:10:18.473815  282948 ssh_runner.go:195] Run: rm -f paused
	I1206 09:10:18.478814  282948 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:10:18.482928  282948 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-54hvq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:18.488631  282948 pod_ready.go:94] pod "coredns-66bc5c9577-54hvq" is "Ready"
	I1206 09:10:18.488655  282948 pod_ready.go:86] duration metric: took 5.702275ms for pod "coredns-66bc5c9577-54hvq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:18.491008  282948 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:18.495113  282948 pod_ready.go:94] pod "etcd-default-k8s-diff-port-213278" is "Ready"
	I1206 09:10:18.495141  282948 pod_ready.go:86] duration metric: took 4.107653ms for pod "etcd-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:18.497351  282948 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:18.501278  282948 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-213278" is "Ready"
	I1206 09:10:18.501299  282948 pod_ready.go:86] duration metric: took 3.927435ms for pod "kube-apiserver-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:18.503322  282948 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:18.883301  282948 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-213278" is "Ready"
	I1206 09:10:18.883332  282948 pod_ready.go:86] duration metric: took 379.989513ms for pod "kube-controller-manager-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:19.083594  282948 pod_ready.go:83] waiting for pod "kube-proxy-86f62" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:19.482824  282948 pod_ready.go:94] pod "kube-proxy-86f62" is "Ready"
	I1206 09:10:19.482856  282948 pod_ready.go:86] duration metric: took 399.230365ms for pod "kube-proxy-86f62" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:19.684066  282948 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:20.085054  282948 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-213278" is "Ready"
	I1206 09:10:20.085095  282948 pod_ready.go:86] duration metric: took 400.997798ms for pod "kube-scheduler-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:20.085110  282948 pod_ready.go:40] duration metric: took 1.606261127s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:10:20.165804  282948 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:10:20.168697  282948 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-213278" cluster and "default" namespace by default
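Once a profile reports Done, the usual smoke test is to point kubectl at it and confirm the node and kube-system pods match what the waiters above observed. A sketch, assuming the kubectl context carries the profile name (minikube's default behaviour):

    kubectl config use-context default-k8s-diff-port-213278
    kubectl get nodes -o wide
    kubectl get pods -n kube-system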
	I1206 09:10:18.961355  301707 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:10:19.032523  301707 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:10:19.032615  301707 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:10:19.218485  301707 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:10:19.517636  301707 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:10:19.544941  301707 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:10:19.707272  301707 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:10:20.906251  301707 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:10:20.907048  301707 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:10:20.913578  301707 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:10:17.041084  302585 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1206 09:10:17.045273  302585 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1206 09:10:17.046273  302585 api_server.go:141] control plane version: v1.34.2
	I1206 09:10:17.046299  302585 api_server.go:131] duration metric: took 1.005807944s to wait for apiserver health ...
	I1206 09:10:17.046319  302585 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:10:17.049871  302585 system_pods.go:59] 8 kube-system pods found
	I1206 09:10:17.049910  302585 system_pods.go:61] "coredns-66bc5c9577-x87kt" [652accc8-2082-4045-b568-7d4a68cd961c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:10:17.049922  302585 system_pods.go:61] "etcd-embed-certs-931091" [21f920fe-8cca-4071-9852-b8234b61a527] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:10:17.049943  302585 system_pods.go:61] "kindnet-kzpz2" [6ce4c876-e571-40c7-a764-c47426d42617] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:10:17.049953  302585 system_pods.go:61] "kube-apiserver-embed-certs-931091" [7007293c-a3cf-4fd7-9fe5-bc4c94a961d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:10:17.049959  302585 system_pods.go:61] "kube-controller-manager-embed-certs-931091" [02251ced-d14d-4b4a-bdbf-098f64d5ed86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:10:17.049967  302585 system_pods.go:61] "kube-proxy-9hp5d" [76177429-d0e3-430d-b316-9b5894760b2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:10:17.049972  302585 system_pods.go:61] "kube-scheduler-embed-certs-931091" [c0f2cca7-46a9-47b6-b41b-e747d29ecf69] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:10:17.050017  302585 system_pods.go:61] "storage-provisioner" [f06399c4-e82b-40d6-9eb5-8d37960bfdd4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:10:17.050030  302585 system_pods.go:74] duration metric: took 3.703246ms to wait for pod list to return data ...
	I1206 09:10:17.050045  302585 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:10:17.052428  302585 default_sa.go:45] found service account: "default"
	I1206 09:10:17.052446  302585 default_sa.go:55] duration metric: took 2.389301ms for default service account to be created ...
	I1206 09:10:17.052453  302585 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:10:17.055163  302585 system_pods.go:86] 8 kube-system pods found
	I1206 09:10:17.055186  302585 system_pods.go:89] "coredns-66bc5c9577-x87kt" [652accc8-2082-4045-b568-7d4a68cd961c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:10:17.055194  302585 system_pods.go:89] "etcd-embed-certs-931091" [21f920fe-8cca-4071-9852-b8234b61a527] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:10:17.055205  302585 system_pods.go:89] "kindnet-kzpz2" [6ce4c876-e571-40c7-a764-c47426d42617] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:10:17.055211  302585 system_pods.go:89] "kube-apiserver-embed-certs-931091" [7007293c-a3cf-4fd7-9fe5-bc4c94a961d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:10:17.055216  302585 system_pods.go:89] "kube-controller-manager-embed-certs-931091" [02251ced-d14d-4b4a-bdbf-098f64d5ed86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:10:17.055223  302585 system_pods.go:89] "kube-proxy-9hp5d" [76177429-d0e3-430d-b316-9b5894760b2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:10:17.055228  302585 system_pods.go:89] "kube-scheduler-embed-certs-931091" [c0f2cca7-46a9-47b6-b41b-e747d29ecf69] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:10:17.055237  302585 system_pods.go:89] "storage-provisioner" [f06399c4-e82b-40d6-9eb5-8d37960bfdd4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:10:17.055243  302585 system_pods.go:126] duration metric: took 2.785428ms to wait for k8s-apps to be running ...
	I1206 09:10:17.055253  302585 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:10:17.055294  302585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:10:17.069211  302585 system_svc.go:56] duration metric: took 13.950722ms WaitForService to wait for kubelet
	I1206 09:10:17.069232  302585 kubeadm.go:587] duration metric: took 2.996352045s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:10:17.069246  302585 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:10:17.072384  302585 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:10:17.072405  302585 node_conditions.go:123] node cpu capacity is 8
	I1206 09:10:17.072426  302585 node_conditions.go:105] duration metric: took 3.174723ms to run NodePressure ...
	I1206 09:10:17.072439  302585 start.go:242] waiting for startup goroutines ...
	I1206 09:10:17.072453  302585 start.go:247] waiting for cluster config update ...
	I1206 09:10:17.072469  302585 start.go:256] writing updated cluster config ...
	I1206 09:10:17.072749  302585 ssh_runner.go:195] Run: rm -f paused
	I1206 09:10:17.076811  302585 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:10:17.080851  302585 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x87kt" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:10:19.086355  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	W1206 09:10:21.089005  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	I1206 09:10:20.915391  301707 out.go:252]   - Booting up control plane ...
	I1206 09:10:20.915592  301707 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:10:20.915737  301707 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:10:20.916292  301707 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:10:20.936095  301707 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:10:20.936240  301707 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:10:20.945084  301707 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:10:20.945645  301707 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:10:20.945709  301707 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:10:21.092506  301707 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:10:21.092646  301707 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:10:22.096363  301707 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001770785s
	I1206 09:10:22.098222  301707 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:10:22.098382  301707 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:10:22.098520  301707 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:10:22.098632  301707 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:10:24.008522  301707 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.910274856s
	I1206 09:10:24.771301  301707 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.673166165s
	I1206 09:10:26.600011  301707 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501868398s
	I1206 09:10:26.617642  301707 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:10:26.629376  301707 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:10:26.638368  301707 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:10:26.638672  301707 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:10:26.652162  301707 kubeadm.go:319] [bootstrap-token] Using token: tg1qxo.kexdas6xz8qbkaf7
	W1206 09:10:23.091342  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	W1206 09:10:25.591188  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	I1206 09:10:26.653820  301707 out.go:252]   - Configuring RBAC rules ...
	I1206 09:10:26.653959  301707 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:10:26.659859  301707 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:10:26.668719  301707 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:10:26.671926  301707 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:10:26.675227  301707 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:10:26.679049  301707 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:10:27.007488  301707 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:10:27.426040  301707 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:10:28.008875  301707 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:10:28.010040  301707 kubeadm.go:319] 
	I1206 09:10:28.010187  301707 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:10:28.010216  301707 kubeadm.go:319] 
	I1206 09:10:28.010313  301707 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:10:28.010324  301707 kubeadm.go:319] 
	I1206 09:10:28.010359  301707 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:10:28.010440  301707 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:10:28.010498  301707 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:10:28.010502  301707 kubeadm.go:319] 
	I1206 09:10:28.010561  301707 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:10:28.010566  301707 kubeadm.go:319] 
	I1206 09:10:28.010628  301707 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:10:28.010633  301707 kubeadm.go:319] 
	I1206 09:10:28.010705  301707 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:10:28.010864  301707 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:10:28.010959  301707 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:10:28.010971  301707 kubeadm.go:319] 
	I1206 09:10:28.011106  301707 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:10:28.011343  301707 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:10:28.011359  301707 kubeadm.go:319] 
	I1206 09:10:28.011489  301707 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tg1qxo.kexdas6xz8qbkaf7 \
	I1206 09:10:28.011645  301707 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:10:28.011675  301707 kubeadm.go:319] 	--control-plane 
	I1206 09:10:28.011683  301707 kubeadm.go:319] 
	I1206 09:10:28.011749  301707 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:10:28.011757  301707 kubeadm.go:319] 
	I1206 09:10:28.011831  301707 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tg1qxo.kexdas6xz8qbkaf7 \
	I1206 09:10:28.011968  301707 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:10:28.015515  301707 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:10:28.015682  301707 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:10:28.015705  301707 cni.go:84] Creating CNI manager for "kindnet"
	I1206 09:10:28.018238  301707 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 09:10:28.019457  301707 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 09:10:28.024181  301707 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:10:28.024199  301707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 09:10:28.039060  301707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:10:28.295474  301707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:10:28.295594  301707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:28.295636  301707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-646473 minikube.k8s.io/updated_at=2025_12_06T09_10_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=kindnet-646473 minikube.k8s.io/primary=true
	I1206 09:10:28.307025  301707 ops.go:34] apiserver oom_adj: -16
	I1206 09:10:28.375930  301707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Dec 06 09:10:18 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:18.115319302Z" level=info msg="Starting container: 4b503d0c792ed745c75d47f6dfe18b4959ce618780c85e7491227a1efa269c19" id=b878c172-30c3-4960-858b-ce9b11c8883a name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:10:18 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:18.117450091Z" level=info msg="Started container" PID=1838 containerID=4b503d0c792ed745c75d47f6dfe18b4959ce618780c85e7491227a1efa269c19 description=kube-system/coredns-66bc5c9577-54hvq/coredns id=b878c172-30c3-4960-858b-ce9b11c8883a name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e0ed1b77fffcb0fe91b4c9807006d070f789e6053e3e8569581cea31a0342cf
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.714556551Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ce46e9ba-0fa8-4f2c-8fff-3d1a15055b71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.714634684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.721744809Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9cf08a0ed2ccb5a96605c8b1f0dace4db6e68e3cfe87c0145ae87ba6a2f179c6 UID:88118ce1-5ebb-4136-900b-1521d34ca0ce NetNS:/var/run/netns/9faef9a0-346e-4ff0-b701-0e1affbc5ee3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00074c450}] Aliases:map[]}"
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.721964124Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.735885768Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9cf08a0ed2ccb5a96605c8b1f0dace4db6e68e3cfe87c0145ae87ba6a2f179c6 UID:88118ce1-5ebb-4136-900b-1521d34ca0ce NetNS:/var/run/netns/9faef9a0-346e-4ff0-b701-0e1affbc5ee3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00074c450}] Aliases:map[]}"
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.736556267Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.737965382Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.739211117Z" level=info msg="Ran pod sandbox 9cf08a0ed2ccb5a96605c8b1f0dace4db6e68e3cfe87c0145ae87ba6a2f179c6 with infra container: default/busybox/POD" id=ce46e9ba-0fa8-4f2c-8fff-3d1a15055b71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.741133194Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=06c1136c-bc54-42f8-8ce6-9b23035794ef name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.741447258Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=06c1136c-bc54-42f8-8ce6-9b23035794ef name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.741527065Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=06c1136c-bc54-42f8-8ce6-9b23035794ef name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.742552748Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7f5c9ded-62af-4e32-813b-5f40767330fd name=/runtime.v1.ImageService/PullImage
	Dec 06 09:10:20 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:20.74449645Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.364216919Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=7f5c9ded-62af-4e32-813b-5f40767330fd name=/runtime.v1.ImageService/PullImage
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.365062544Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aaf11e62-a551-4abb-a1d8-d2b58b556247 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.366619325Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=da5786e8-c6c0-4744-a7d2-179fa070ab55 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.462803294Z" level=info msg="Creating container: default/busybox/busybox" id=63750df8-d81f-46d3-b558-1c86cafa7a40 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.462960609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.49692143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.497388034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.550756152Z" level=info msg="Created container cc33900be320cc2a9b621c58181425d49cef719ce413ee2ec03fd0731716aef5: default/busybox/busybox" id=63750df8-d81f-46d3-b558-1c86cafa7a40 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.551590622Z" level=info msg="Starting container: cc33900be320cc2a9b621c58181425d49cef719ce413ee2ec03fd0731716aef5" id=6b094bc6-fece-4bf2-bca6-8a0b696e5875 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:10:22 default-k8s-diff-port-213278 crio[767]: time="2025-12-06T09:10:22.553861259Z" level=info msg="Started container" PID=1915 containerID=cc33900be320cc2a9b621c58181425d49cef719ce413ee2ec03fd0731716aef5 description=default/busybox/busybox id=6b094bc6-fece-4bf2-bca6-8a0b696e5875 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9cf08a0ed2ccb5a96605c8b1f0dace4db6e68e3cfe87c0145ae87ba6a2f179c6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	cc33900be320c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   9cf08a0ed2ccb       busybox                                                default
	4b503d0c792ed       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago       Running             coredns                   0                   1e0ed1b77fffc       coredns-66bc5c9577-54hvq                               kube-system
	07bcc7d0e9357       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago       Running             storage-provisioner       0                   176f9c0549c5f       storage-provisioner                                    kube-system
	e9eba2768f592       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      53 seconds ago       Running             kube-proxy                0                   c9bae62cc05e3       kube-proxy-86f62                                       kube-system
	f609280288429       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      53 seconds ago       Running             kindnet-cni               0                   ef6d87f716cbf       kindnet-4jw2t                                          kube-system
	d62caf364dc7c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      About a minute ago   Running             etcd                      0                   1258e648dac8b       etcd-default-k8s-diff-port-213278                      kube-system
	44200f13c72a1       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      About a minute ago   Running             kube-controller-manager   0                   feb1a46fd4791       kube-controller-manager-default-k8s-diff-port-213278   kube-system
	fb9981661d979       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      About a minute ago   Running             kube-scheduler            0                   32916fa9b6dbd       kube-scheduler-default-k8s-diff-port-213278            kube-system
	0a353e06f2a96       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      About a minute ago   Running             kube-apiserver            0                   f00526dd95961       kube-apiserver-default-k8s-diff-port-213278            kube-system
	
	
	==> coredns [4b503d0c792ed745c75d47f6dfe18b4959ce618780c85e7491227a1efa269c19] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53016 - 16523 "HINFO IN 357696435500260815.6606442901451309931. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.019760979s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-213278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-213278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=default-k8s-diff-port-213278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_09_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:09:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-213278
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:10:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:10:17 +0000   Sat, 06 Dec 2025 09:09:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:10:17 +0000   Sat, 06 Dec 2025 09:09:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:10:17 +0000   Sat, 06 Dec 2025 09:09:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:10:17 +0000   Sat, 06 Dec 2025 09:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-213278
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                eec647d7-7697-4ad8-a7c7-fd1943fc3364
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-54hvq                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-default-k8s-diff-port-213278                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-4jw2t                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-213278             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-213278    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-86f62                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-213278             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  65s (x8 over 66s)  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 66s)  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 66s)  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientPID
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node default-k8s-diff-port-213278 event: Registered Node default-k8s-diff-port-213278 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-213278 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [d62caf364dc7c9adb785ad70dad2ff863f792c57d01f35a2e24369fcf9d6e2fe] <==
	{"level":"warn","ts":"2025-12-06T09:09:27.855652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.865912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.872629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.879005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.886490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.893453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.900910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.907234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.915785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.927101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.934347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.941413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.947828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.955608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.963246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.971330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.978170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.984910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:27.992197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:28.001175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:28.010087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:28.031462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36692","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:36692: read: connection reset by peer"}
	{"level":"warn","ts":"2025-12-06T09:09:28.036309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:28.043070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:09:28.095616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36746","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:10:31 up 53 min,  0 user,  load average: 4.95, 3.18, 2.08
	Linux default-k8s-diff-port-213278 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f609280288429342f0503e3a118a8efada6997267cd56c84da030ce3e3724c91] <==
	I1206 09:09:37.107771       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:09:37.133073       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 09:09:37.133247       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:09:37.133264       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:09:37.133282       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:09:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:09:37.336281       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:09:37.336310       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:09:37.336320       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:09:37.336437       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 09:10:07.337388       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1206 09:10:07.337392       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 09:10:07.337430       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 09:10:07.337444       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1206 09:10:08.936673       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:10:08.936702       1 metrics.go:72] Registering metrics
	I1206 09:10:08.936780       1 controller.go:711] "Syncing nftables rules"
	I1206 09:10:17.343108       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:10:17.343154       1 main.go:301] handling current node
	I1206 09:10:27.336417       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:10:27.336453       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a353e06f2a96501fdeadf17c105c20173c610629f3ebaf2fb7ebcaece029b4d] <==
	I1206 09:09:28.680761       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:09:28.683104       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:28.683178       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1206 09:09:28.689246       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:09:28.689493       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:28.855918       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:09:29.482772       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 09:09:29.486795       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 09:09:29.486809       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:09:30.032766       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:09:30.078517       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:09:30.189951       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 09:09:30.196173       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1206 09:09:30.197532       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:09:30.202642       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:09:30.513337       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:09:31.192435       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:09:31.202014       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 09:09:31.208803       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:09:35.615363       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:09:36.372434       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:36.380140       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:09:36.522720       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1206 09:09:36.522720       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1206 09:10:29.515184       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:40852: use of closed network connection
	
	
	==> kube-controller-manager [44200f13c72a18f379abab26433fd093961d9405cf63a5ad608d23cf6a93cefa] <==
	I1206 09:09:35.511436       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:09:35.512688       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:09:35.512787       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:09:35.512876       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:09:35.512906       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:09:35.514053       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:09:35.514178       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:09:35.514196       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 09:09:35.514228       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:09:35.514252       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:09:35.514272       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:09:35.514595       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:09:35.515745       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:09:35.516531       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 09:09:35.516552       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:09:35.517067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:09:35.517962       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:09:35.519158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:09:35.520264       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:09:35.521452       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:09:35.523787       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:09:35.524828       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:09:35.531117       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:09:35.541333       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:10:20.500512       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e9eba2768f592dba058b935d44a148e7db2a7ee41eca3e8ca2b54c869a4fe0cc] <==
	I1206 09:09:36.983430       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:09:37.049832       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:09:37.150899       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:09:37.150936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 09:09:37.151074       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:09:37.169913       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:09:37.169961       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:09:37.175163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:09:37.175640       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:09:37.175677       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:09:37.177216       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:09:37.177247       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:09:37.177267       1 config.go:200] "Starting service config controller"
	I1206 09:09:37.177275       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:09:37.178039       1 config.go:309] "Starting node config controller"
	I1206 09:09:37.178229       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:09:37.178609       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:09:37.178167       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:09:37.178746       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:09:37.278629       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:09:37.278660       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:09:37.278878       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [fb9981661d979802bca944a3a4a9ee0654cb9df27076beb2af973c5af6fe6703] <==
	E1206 09:09:28.533619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:09:28.533645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:09:28.533666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:09:28.533724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:09:28.533771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:09:28.534372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:09:28.534468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:09:28.534536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:09:28.534668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:09:28.534722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:09:29.395713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:09:29.426030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:09:29.485596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:09:29.528969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:09:29.550486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:09:29.589847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:09:29.595107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:09:29.640808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:09:29.712535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:09:29.712899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:09:29.734241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:09:29.756632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:09:29.784067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:09:29.807548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:09:32.832140       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:09:32 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:32.133589    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-213278" podStartSLOduration=1.1335637219999999 podStartE2EDuration="1.133563722s" podCreationTimestamp="2025-12-06 09:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:32.120167717 +0000 UTC m=+1.162254188" watchObservedRunningTime="2025-12-06 09:09:32.133563722 +0000 UTC m=+1.175650181"
	Dec 06 09:09:32 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:32.148656    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-213278" podStartSLOduration=1.148632253 podStartE2EDuration="1.148632253s" podCreationTimestamp="2025-12-06 09:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:32.133821392 +0000 UTC m=+1.175907851" watchObservedRunningTime="2025-12-06 09:09:32.148632253 +0000 UTC m=+1.190718714"
	Dec 06 09:09:32 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:32.162920    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-213278" podStartSLOduration=1.16289943 podStartE2EDuration="1.16289943s" podCreationTimestamp="2025-12-06 09:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:32.148850721 +0000 UTC m=+1.190937182" watchObservedRunningTime="2025-12-06 09:09:32.16289943 +0000 UTC m=+1.204985889"
	Dec 06 09:09:35 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:35.575585    1316 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:09:35 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:35.576253    1316 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:09:36 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:36.570272    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1e817daf-c694-4ddf-8e08-85f504421f9b-cni-cfg\") pod \"kindnet-4jw2t\" (UID: \"1e817daf-c694-4ddf-8e08-85f504421f9b\") " pod="kube-system/kindnet-4jw2t"
	Dec 06 09:09:36 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:36.570369    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc-xtables-lock\") pod \"kube-proxy-86f62\" (UID: \"6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc\") " pod="kube-system/kube-proxy-86f62"
	Dec 06 09:09:36 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:36.570393    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc-lib-modules\") pod \"kube-proxy-86f62\" (UID: \"6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc\") " pod="kube-system/kube-proxy-86f62"
	Dec 06 09:09:36 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:36.570412    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e817daf-c694-4ddf-8e08-85f504421f9b-xtables-lock\") pod \"kindnet-4jw2t\" (UID: \"1e817daf-c694-4ddf-8e08-85f504421f9b\") " pod="kube-system/kindnet-4jw2t"
	Dec 06 09:09:36 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:36.570526    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc-kube-proxy\") pod \"kube-proxy-86f62\" (UID: \"6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc\") " pod="kube-system/kube-proxy-86f62"
	Dec 06 09:09:36 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:36.570574    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfw9n\" (UniqueName: \"kubernetes.io/projected/6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc-kube-api-access-wfw9n\") pod \"kube-proxy-86f62\" (UID: \"6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc\") " pod="kube-system/kube-proxy-86f62"
	Dec 06 09:09:36 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:36.570597    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e817daf-c694-4ddf-8e08-85f504421f9b-lib-modules\") pod \"kindnet-4jw2t\" (UID: \"1e817daf-c694-4ddf-8e08-85f504421f9b\") " pod="kube-system/kindnet-4jw2t"
	Dec 06 09:09:36 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:36.570650    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn76k\" (UniqueName: \"kubernetes.io/projected/1e817daf-c694-4ddf-8e08-85f504421f9b-kube-api-access-jn76k\") pod \"kindnet-4jw2t\" (UID: \"1e817daf-c694-4ddf-8e08-85f504421f9b\") " pod="kube-system/kindnet-4jw2t"
	Dec 06 09:09:37 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:37.096465    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4jw2t" podStartSLOduration=1.096444712 podStartE2EDuration="1.096444712s" podCreationTimestamp="2025-12-06 09:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:37.096347257 +0000 UTC m=+6.138433728" watchObservedRunningTime="2025-12-06 09:09:37.096444712 +0000 UTC m=+6.138531172"
	Dec 06 09:09:37 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:09:37.118879    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-86f62" podStartSLOduration=1.11885836 podStartE2EDuration="1.11885836s" podCreationTimestamp="2025-12-06 09:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:09:37.108523746 +0000 UTC m=+6.150610204" watchObservedRunningTime="2025-12-06 09:09:37.11885836 +0000 UTC m=+6.160944820"
	Dec 06 09:10:17 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:10:17.733596    1316 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 09:10:17 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:10:17.872536    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br6pc\" (UniqueName: \"kubernetes.io/projected/f156a081-19f1-4a04-8234-24500867cf67-kube-api-access-br6pc\") pod \"coredns-66bc5c9577-54hvq\" (UID: \"f156a081-19f1-4a04-8234-24500867cf67\") " pod="kube-system/coredns-66bc5c9577-54hvq"
	Dec 06 09:10:17 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:10:17.872589    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b-tmp\") pod \"storage-provisioner\" (UID: \"4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b\") " pod="kube-system/storage-provisioner"
	Dec 06 09:10:17 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:10:17.872614    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpb9z\" (UniqueName: \"kubernetes.io/projected/4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b-kube-api-access-dpb9z\") pod \"storage-provisioner\" (UID: \"4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b\") " pod="kube-system/storage-provisioner"
	Dec 06 09:10:17 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:10:17.872687    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f156a081-19f1-4a04-8234-24500867cf67-config-volume\") pod \"coredns-66bc5c9577-54hvq\" (UID: \"f156a081-19f1-4a04-8234-24500867cf67\") " pod="kube-system/coredns-66bc5c9577-54hvq"
	Dec 06 09:10:18 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:10:18.201224    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.20118692 podStartE2EDuration="42.20118692s" podCreationTimestamp="2025-12-06 09:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:10:18.190961938 +0000 UTC m=+47.233048409" watchObservedRunningTime="2025-12-06 09:10:18.20118692 +0000 UTC m=+47.243273379"
	Dec 06 09:10:20 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:10:20.405298    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-54hvq" podStartSLOduration=44.40527149 podStartE2EDuration="44.40527149s" podCreationTimestamp="2025-12-06 09:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:10:18.201027772 +0000 UTC m=+47.243114231" watchObservedRunningTime="2025-12-06 09:10:20.40527149 +0000 UTC m=+49.447357950"
	Dec 06 09:10:20 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:10:20.488609    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6l2s\" (UniqueName: \"kubernetes.io/projected/88118ce1-5ebb-4136-900b-1521d34ca0ce-kube-api-access-g6l2s\") pod \"busybox\" (UID: \"88118ce1-5ebb-4136-900b-1521d34ca0ce\") " pod="default/busybox"
	Dec 06 09:10:23 default-k8s-diff-port-213278 kubelet[1316]: I1206 09:10:23.213493    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.589519336 podStartE2EDuration="3.213469969s" podCreationTimestamp="2025-12-06 09:10:20 +0000 UTC" firstStartedPulling="2025-12-06 09:10:20.742072574 +0000 UTC m=+49.784159028" lastFinishedPulling="2025-12-06 09:10:22.366023217 +0000 UTC m=+51.408109661" observedRunningTime="2025-12-06 09:10:23.213170515 +0000 UTC m=+52.255256974" watchObservedRunningTime="2025-12-06 09:10:23.213469969 +0000 UTC m=+52.255556428"
	Dec 06 09:10:29 default-k8s-diff-port-213278 kubelet[1316]: E1206 09:10:29.515076    1316 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60018->127.0.0.1:40293: write tcp 127.0.0.1:60018->127.0.0.1:40293: write: broken pipe
	
	
	==> storage-provisioner [07bcc7d0e9357f88e24d22c85abc002d19a2c7d885cacceea6962ddd0f0e50ba] <==
	I1206 09:10:18.129018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:10:18.138121       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:10:18.138178       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:10:18.140842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:18.147331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:10:18.147500       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:10:18.147574       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"649337db-0a79-4b9c-a481-f9515237bbf3", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-213278_a43985b5-2c15-42d5-9105-37598ae58eb8 became leader
	I1206 09:10:18.147631       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-213278_a43985b5-2c15-42d5-9105-37598ae58eb8!
	W1206 09:10:18.149783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:18.156663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:10:18.248654       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-213278_a43985b5-2c15-42d5-9105-37598ae58eb8!
	W1206 09:10:20.161696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:20.169556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:22.172822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:22.183284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:24.187679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:24.194834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:26.198274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:26.203903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:28.208341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:28.212570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:30.216367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:30.220377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-213278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-931091 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-931091 --alsologtostderr -v=1: exit status 80 (2.138652763s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-931091 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:11:08.968267  319406 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:11:08.968566  319406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:08.968575  319406 out.go:374] Setting ErrFile to fd 2...
	I1206 09:11:08.968582  319406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:08.968879  319406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:11:08.969231  319406 out.go:368] Setting JSON to false
	I1206 09:11:08.969253  319406 mustload.go:66] Loading cluster: embed-certs-931091
	I1206 09:11:08.969759  319406 config.go:182] Loaded profile config "embed-certs-931091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:08.970427  319406 cli_runner.go:164] Run: docker container inspect embed-certs-931091 --format={{.State.Status}}
	I1206 09:11:08.997001  319406 host.go:66] Checking if "embed-certs-931091" exists ...
	I1206 09:11:08.997390  319406 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:11:09.095703  319406 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-06 09:11:09.08206717 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:11:09.098623  319406 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-931091 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:11:09.100700  319406 out.go:179] * Pausing node embed-certs-931091 ... 
	I1206 09:11:09.102031  319406 host.go:66] Checking if "embed-certs-931091" exists ...
	I1206 09:11:09.102410  319406 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:09.102453  319406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931091
	I1206 09:11:09.127026  319406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/embed-certs-931091/id_rsa Username:docker}
	I1206 09:11:09.237723  319406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:09.269620  319406 pause.go:52] kubelet running: true
	I1206 09:11:09.269823  319406 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:11:09.513140  319406 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:11:09.513224  319406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:11:09.617142  319406 cri.go:89] found id: "18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da"
	I1206 09:11:09.617172  319406 cri.go:89] found id: "e1cb1e6a344a1ed0d926d7ad94b48af2f3c736de156bc566f54758c87a09ee4e"
	I1206 09:11:09.617180  319406 cri.go:89] found id: "b82c97edbdf4fe03412ca2f96bcd004be9526498d0c3112aa46de52d9a2f0c3c"
	I1206 09:11:09.617185  319406 cri.go:89] found id: "37108c9bddfdb2c8b274f5250ba39d648e2efb7d93b47c46241aea6a5696a5cf"
	I1206 09:11:09.617189  319406 cri.go:89] found id: "edd89974c1046589be1d988771842ab006817d9cb74b7aa914e30d9c1988d400"
	I1206 09:11:09.617196  319406 cri.go:89] found id: "a846117bc72b733c4769ce32f31c23b76ae89d79e9fd9cf10be97e49bc2b4a74"
	I1206 09:11:09.617201  319406 cri.go:89] found id: "04174e56b26bf5e8534176ff57e230be3ed770891a615c3b75077b0468d06685"
	I1206 09:11:09.617206  319406 cri.go:89] found id: "9a3dc4e5add4a40d23fc1d867a32c27494d2f0aa5fe72049c03da86c84d3090b"
	I1206 09:11:09.617210  319406 cri.go:89] found id: "893b7522c648e625ee7cedf9142d4b1472b197d9b456f9a6939ff5eafca0b904"
	I1206 09:11:09.617233  319406 cri.go:89] found id: "bb28e56c236788efe1a069138e54e21d5360b795aab10ade3cdc683c428bde46"
	I1206 09:11:09.617238  319406 cri.go:89] found id: "682529937bb653f6ae7d2415238d63ec894db888c269bbed09b7929099eb766b"
	I1206 09:11:09.617242  319406 cri.go:89] found id: ""
	I1206 09:11:09.617313  319406 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:11:09.634568  319406 retry.go:31] will retry after 188.298275ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:09Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:11:09.823124  319406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:09.845412  319406 pause.go:52] kubelet running: false
	I1206 09:11:09.845486  319406 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:11:10.069604  319406 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:11:10.069717  319406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:11:10.157340  319406 cri.go:89] found id: "18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da"
	I1206 09:11:10.157368  319406 cri.go:89] found id: "e1cb1e6a344a1ed0d926d7ad94b48af2f3c736de156bc566f54758c87a09ee4e"
	I1206 09:11:10.157374  319406 cri.go:89] found id: "b82c97edbdf4fe03412ca2f96bcd004be9526498d0c3112aa46de52d9a2f0c3c"
	I1206 09:11:10.157379  319406 cri.go:89] found id: "37108c9bddfdb2c8b274f5250ba39d648e2efb7d93b47c46241aea6a5696a5cf"
	I1206 09:11:10.157384  319406 cri.go:89] found id: "edd89974c1046589be1d988771842ab006817d9cb74b7aa914e30d9c1988d400"
	I1206 09:11:10.157389  319406 cri.go:89] found id: "a846117bc72b733c4769ce32f31c23b76ae89d79e9fd9cf10be97e49bc2b4a74"
	I1206 09:11:10.157401  319406 cri.go:89] found id: "04174e56b26bf5e8534176ff57e230be3ed770891a615c3b75077b0468d06685"
	I1206 09:11:10.157407  319406 cri.go:89] found id: "9a3dc4e5add4a40d23fc1d867a32c27494d2f0aa5fe72049c03da86c84d3090b"
	I1206 09:11:10.157411  319406 cri.go:89] found id: "893b7522c648e625ee7cedf9142d4b1472b197d9b456f9a6939ff5eafca0b904"
	I1206 09:11:10.157419  319406 cri.go:89] found id: "bb28e56c236788efe1a069138e54e21d5360b795aab10ade3cdc683c428bde46"
	I1206 09:11:10.157432  319406 cri.go:89] found id: "682529937bb653f6ae7d2415238d63ec894db888c269bbed09b7929099eb766b"
	I1206 09:11:10.157437  319406 cri.go:89] found id: ""
	I1206 09:11:10.157486  319406 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:11:10.173866  319406 retry.go:31] will retry after 494.687021ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:10Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:11:10.669314  319406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:10.686316  319406 pause.go:52] kubelet running: false
	I1206 09:11:10.686378  319406 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:11:10.896280  319406 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:11:10.896376  319406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:11:10.998325  319406 cri.go:89] found id: "18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da"
	I1206 09:11:10.998368  319406 cri.go:89] found id: "e1cb1e6a344a1ed0d926d7ad94b48af2f3c736de156bc566f54758c87a09ee4e"
	I1206 09:11:10.998374  319406 cri.go:89] found id: "b82c97edbdf4fe03412ca2f96bcd004be9526498d0c3112aa46de52d9a2f0c3c"
	I1206 09:11:10.998379  319406 cri.go:89] found id: "37108c9bddfdb2c8b274f5250ba39d648e2efb7d93b47c46241aea6a5696a5cf"
	I1206 09:11:10.998384  319406 cri.go:89] found id: "edd89974c1046589be1d988771842ab006817d9cb74b7aa914e30d9c1988d400"
	I1206 09:11:10.998388  319406 cri.go:89] found id: "a846117bc72b733c4769ce32f31c23b76ae89d79e9fd9cf10be97e49bc2b4a74"
	I1206 09:11:10.998398  319406 cri.go:89] found id: "04174e56b26bf5e8534176ff57e230be3ed770891a615c3b75077b0468d06685"
	I1206 09:11:10.998403  319406 cri.go:89] found id: "9a3dc4e5add4a40d23fc1d867a32c27494d2f0aa5fe72049c03da86c84d3090b"
	I1206 09:11:10.998408  319406 cri.go:89] found id: "893b7522c648e625ee7cedf9142d4b1472b197d9b456f9a6939ff5eafca0b904"
	I1206 09:11:10.998433  319406 cri.go:89] found id: "bb28e56c236788efe1a069138e54e21d5360b795aab10ade3cdc683c428bde46"
	I1206 09:11:10.998442  319406 cri.go:89] found id: "682529937bb653f6ae7d2415238d63ec894db888c269bbed09b7929099eb766b"
	I1206 09:11:10.998446  319406 cri.go:89] found id: ""
	I1206 09:11:10.998491  319406 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:11:11.022164  319406 out.go:203] 
	W1206 09:11:11.023556  319406 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:11:11.023582  319406 out.go:285] * 
	* 
	W1206 09:11:11.030527  319406 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:11:11.032951  319406 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-931091 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-931091
helpers_test.go:243: (dbg) docker inspect embed-certs-931091:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63",
	        "Created": "2025-12-06T09:09:01.161536877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:10:07.136555493Z",
	            "FinishedAt": "2025-12-06T09:10:04.96786348Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/hostname",
	        "HostsPath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/hosts",
	        "LogPath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63-json.log",
	        "Name": "/embed-certs-931091",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-931091:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-931091",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63",
	                "LowerDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213/merged",
	                "UpperDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213/diff",
	                "WorkDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-931091",
	                "Source": "/var/lib/docker/volumes/embed-certs-931091/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-931091",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-931091",
	                "name.minikube.sigs.k8s.io": "embed-certs-931091",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "657393d437d7daaef6eb0a1cd7ce91aa3ac3278db512cd8ed528973189601d1f",
	            "SandboxKey": "/var/run/docker/netns/657393d437d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-931091": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70ecd367dba42d1818bd7c40275791d03131ddf8b1c44024d97d10092da13f1c",
	                    "EndpointID": "ced92d4b53e2858dc4b7f5db9baba991a30d491bcb16a9efea9c1bcf89a715c0",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "06:84:21:db:02:36",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-931091",
	                        "6aa3c5072933"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931091 -n embed-certs-931091
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931091 -n embed-certs-931091: exit status 2 (401.702745ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-931091 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-931091 logs -n 25: (1.238571722s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p json-output-632983 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-632983       │ testUser │ v1.37.0 │ 06 Dec 25 08:49 UTC │ 06 Dec 25 08:49 UTC │
	│ pause   │ -p json-output-632983 --output=json --user=testUser                                                                     │ json-output-632983       │ testUser │ v1.37.0 │ 06 Dec 25 08:49 UTC │                     │
	│ unpause │ -p json-output-632983 --output=json --user=testUser                                                                     │ json-output-632983       │ testUser │ v1.37.0 │ 06 Dec 25 08:50 UTC │                     │
	│ stop    │ -p json-output-632983 --output=json --user=testUser                                                                     │ json-output-632983       │ testUser │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:50 UTC │
	│ delete  │ -p json-output-632983                                                                                                   │ json-output-632983       │ jenkins  │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:50 UTC │
	│ start   │ -p json-output-error-806429 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-806429 │ jenkins  │ v1.37.0 │ 06 Dec 25 08:50 UTC │                     │
	│ delete  │ -p json-output-error-806429                                                                                             │ json-output-error-806429 │ jenkins  │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:50 UTC │
	│ start   │ -p docker-network-913743 --network=                                                                                     │ docker-network-913743    │ jenkins  │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:50 UTC │
	│ delete  │ -p docker-network-913743                                                                                                │ docker-network-913743    │ jenkins  │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:50 UTC │
	│ start   │ -p docker-network-878590 --network=bridge                                                                               │ docker-network-878590    │ jenkins  │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:51 UTC │
	│ delete  │ -p docker-network-878590                                                                                                │ docker-network-878590    │ jenkins  │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:51 UTC │
	│ start   │ -p existing-network-911484 --network=existing-network                                                                   │ existing-network-911484  │ jenkins  │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:51 UTC │
	│ delete  │ -p existing-network-911484                                                                                              │ existing-network-911484  │ jenkins  │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:51 UTC │
	│ start   │ -p custom-subnet-376661 --subnet=192.168.60.0/24                                                                        │ custom-subnet-376661     │ jenkins  │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:51 UTC │
	│ delete  │ -p custom-subnet-376661                                                                                                 │ custom-subnet-376661     │ jenkins  │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:52 UTC │
	│ start   │ -p static-ip-850043 --static-ip=192.168.200.200                                                                         │ static-ip-850043         │ jenkins  │ v1.37.0 │ 06 Dec 25 08:52 UTC │ 06 Dec 25 08:52 UTC │
	│ ip      │ static-ip-850043 ip                                                                                                     │ static-ip-850043         │ jenkins  │ v1.37.0 │ 06 Dec 25 08:52 UTC │ 06 Dec 25 08:52 UTC │
	│ delete  │ -p static-ip-850043                                                                                                     │ static-ip-850043         │ jenkins  │ v1.37.0 │ 06 Dec 25 08:52 UTC │ 06 Dec 25 08:52 UTC │
	│ start   │ -p first-809574 --driver=docker  --container-runtime=crio                                                               │ first-809574             │ jenkins  │ v1.37.0 │ 06 Dec 25 08:52 UTC │ 06 Dec 25 08:52 UTC │
	│ ssh     │ -p kindnet-646473 sudo crictl ps --all                                                                                  │ kindnet-646473           │ jenkins  │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ pause   │ -p embed-certs-931091 --alsologtostderr -v=1                                                                            │ embed-certs-931091       │ jenkins  │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p kindnet-646473 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                           │ kindnet-646473           │ jenkins  │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo ip a s                                                                                           │ kindnet-646473           │ jenkins  │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo ip r s                                                                                           │ kindnet-646473           │ jenkins  │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo iptables-save                                                                                    │ kindnet-646473           │ jenkins  │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:51.687484  315313 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:51.687779  315313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:51.687791  315313 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:51.687797  315313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:51.688009  315313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:10:51.688477  315313 out.go:368] Setting JSON to false
	I1206 09:10:51.689734  315313 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3203,"bootTime":1765009049,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:51.689790  315313 start.go:143] virtualization: kvm guest
	I1206 09:10:51.692037  315313 out.go:179] * [default-k8s-diff-port-213278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:10:51.693500  315313 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:10:51.693501  315313 notify.go:221] Checking for updates...
	I1206 09:10:51.697175  315313 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:51.698593  315313 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:51.699972  315313 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:10:51.701481  315313 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:10:51.703043  315313 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:10:51.704694  315313 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:51.705366  315313 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:10:51.730310  315313 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:10:51.730386  315313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:10:51.790142  315313 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:10:51.779905966 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:10:51.790261  315313 docker.go:319] overlay module found
	I1206 09:10:51.792291  315313 out.go:179] * Using the docker driver based on existing profile
	W1206 09:10:47.086648  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	W1206 09:10:49.587106  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	I1206 09:10:51.793608  315313 start.go:309] selected driver: docker
	I1206 09:10:51.793622  315313 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:51.793742  315313 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:10:51.794336  315313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:10:51.852011  315313 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:10:51.842150635 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:10:51.852298  315313 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:10:51.852339  315313 cni.go:84] Creating CNI manager for ""
	I1206 09:10:51.852415  315313 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:10:51.852470  315313 start.go:353] cluster config:
	{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:51.855111  315313 out.go:179] * Starting "default-k8s-diff-port-213278" primary control-plane node in "default-k8s-diff-port-213278" cluster
	I1206 09:10:51.856442  315313 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:10:51.858042  315313 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:10:51.859390  315313 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:10:51.859443  315313 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:10:51.859456  315313 cache.go:65] Caching tarball of preloaded images
	I1206 09:10:51.859504  315313 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:10:51.859539  315313 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:10:51.859550  315313 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:10:51.859680  315313 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json ...
	I1206 09:10:51.879523  315313 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:10:51.879542  315313 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:10:51.879557  315313 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:10:51.879583  315313 start.go:360] acquireMachinesLock for default-k8s-diff-port-213278: {Name:mk866228eff8eb9f8cbf106e77f0dc837aabddf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:10:51.879634  315313 start.go:364] duration metric: took 34.837µs to acquireMachinesLock for "default-k8s-diff-port-213278"
	I1206 09:10:51.879679  315313 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:10:51.879689  315313 fix.go:54] fixHost starting: 
	I1206 09:10:51.879889  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:51.898040  315313 fix.go:112] recreateIfNeeded on default-k8s-diff-port-213278: state=Stopped err=<nil>
	W1206 09:10:51.898081  315313 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:10:54.497657  312610 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:10:54.497761  312610 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:10:54.497895  312610 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:10:54.497983  312610 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:10:54.498111  312610 kubeadm.go:319] OS: Linux
	I1206 09:10:54.498184  312610 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:10:54.498255  312610 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:10:54.498327  312610 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:10:54.498395  312610 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:10:54.498470  312610 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:10:54.498544  312610 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:10:54.498620  312610 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:10:54.498698  312610 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:10:54.498820  312610 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:10:54.498965  312610 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:10:54.499127  312610 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:10:54.499216  312610 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:10:54.501504  312610 out.go:252]   - Generating certificates and keys ...
	I1206 09:10:54.501598  312610 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:10:54.501694  312610 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:10:54.501817  312610 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:10:54.501905  312610 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:10:54.502053  312610 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:10:54.502135  312610 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:10:54.502187  312610 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:10:54.502372  312610 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:10:54.502469  312610 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:10:54.502637  312610 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:10:54.502738  312610 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:10:54.502843  312610 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:10:54.502912  312610 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:10:54.503019  312610 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:10:54.503091  312610 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:10:54.503173  312610 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:10:54.503250  312610 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:10:54.503352  312610 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:10:54.503441  312610 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:10:54.503567  312610 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:10:54.503661  312610 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:10:54.505199  312610 out.go:252]   - Booting up control plane ...
	I1206 09:10:54.505328  312610 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:10:54.505433  312610 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:10:54.505537  312610 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:10:54.505685  312610 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:10:54.505812  312610 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:10:54.505975  312610 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:10:54.506117  312610 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:10:54.506152  312610 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:10:54.506336  312610 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:10:54.506483  312610 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:10:54.506563  312610 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.792827ms
	I1206 09:10:54.506704  312610 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:10:54.506832  312610 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:10:54.506958  312610 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:10:54.507103  312610 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:10:54.507228  312610 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.431262327s
	I1206 09:10:54.507291  312610 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.290862862s
	I1206 09:10:54.507368  312610 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00162496s
	I1206 09:10:54.507486  312610 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:10:54.507661  312610 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:10:54.507748  312610 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:10:54.507969  312610 kubeadm.go:319] [mark-control-plane] Marking the node calico-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:10:54.508077  312610 kubeadm.go:319] [bootstrap-token] Using token: stnvv1.3a2zyuo6licwoyaf
	I1206 09:10:54.511048  312610 out.go:252]   - Configuring RBAC rules ...
	I1206 09:10:54.511185  312610 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:10:54.511312  312610 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:10:54.511527  312610 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:10:54.511713  312610 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:10:54.511911  312610 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:10:54.512063  312610 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:10:54.512261  312610 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:10:54.512339  312610 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:10:54.512407  312610 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:10:54.512417  312610 kubeadm.go:319] 
	I1206 09:10:54.512506  312610 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:10:54.512518  312610 kubeadm.go:319] 
	I1206 09:10:54.512636  312610 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:10:54.512653  312610 kubeadm.go:319] 
	I1206 09:10:54.512686  312610 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:10:54.512768  312610 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:10:54.512840  312610 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:10:54.512845  312610 kubeadm.go:319] 
	I1206 09:10:54.512921  312610 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:10:54.512927  312610 kubeadm.go:319] 
	I1206 09:10:54.512981  312610 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:10:54.513018  312610 kubeadm.go:319] 
	I1206 09:10:54.513083  312610 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:10:54.513215  312610 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:10:54.513314  312610 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:10:54.513324  312610 kubeadm.go:319] 
	I1206 09:10:54.513441  312610 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:10:54.513541  312610 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:10:54.513550  312610 kubeadm.go:319] 
	I1206 09:10:54.513665  312610 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token stnvv1.3a2zyuo6licwoyaf \
	I1206 09:10:54.513781  312610 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:10:54.513812  312610 kubeadm.go:319] 	--control-plane 
	I1206 09:10:54.513818  312610 kubeadm.go:319] 
	I1206 09:10:54.513929  312610 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:10:54.513939  312610 kubeadm.go:319] 
	I1206 09:10:54.514074  312610 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token stnvv1.3a2zyuo6licwoyaf \
	I1206 09:10:54.514218  312610 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:10:54.514232  312610 cni.go:84] Creating CNI manager for "calico"
	I1206 09:10:54.515788  312610 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1206 09:10:54.517196  312610 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:10:54.517219  312610 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1206 09:10:54.535886  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:10:55.345858  312610 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:10:55.345957  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:55.346038  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-646473 minikube.k8s.io/updated_at=2025_12_06T09_10_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=calico-646473 minikube.k8s.io/primary=true
	I1206 09:10:55.436221  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:55.436259  312610 ops.go:34] apiserver oom_adj: -16
	I1206 09:10:51.900024  315313 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-213278" ...
	I1206 09:10:51.900102  315313 cli_runner.go:164] Run: docker start default-k8s-diff-port-213278
	I1206 09:10:52.158213  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:52.176915  315313 kic.go:430] container "default-k8s-diff-port-213278" state is running.
	I1206 09:10:52.177312  315313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-213278
	I1206 09:10:52.196809  315313 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json ...
	I1206 09:10:52.197044  315313 machine.go:94] provisionDockerMachine start ...
	I1206 09:10:52.197104  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:52.216620  315313 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:52.216874  315313 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 09:10:52.216891  315313 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:10:52.217579  315313 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35720->127.0.0.1:33118: read: connection reset by peer
	I1206 09:10:55.371817  315313 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-213278
	
	I1206 09:10:55.371846  315313 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-213278"
	I1206 09:10:55.371930  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.395799  315313 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:55.396235  315313 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 09:10:55.396269  315313 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-213278 && echo "default-k8s-diff-port-213278" | sudo tee /etc/hostname
	I1206 09:10:55.539803  315313 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-213278
	
	I1206 09:10:55.539895  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.559243  315313 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:55.559565  315313 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 09:10:55.559599  315313 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-213278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-213278/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-213278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:10:55.688673  315313 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:10:55.688702  315313 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:10:55.688742  315313 ubuntu.go:190] setting up certificates
	I1206 09:10:55.688767  315313 provision.go:84] configureAuth start
	I1206 09:10:55.688841  315313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-213278
	I1206 09:10:55.707587  315313 provision.go:143] copyHostCerts
	I1206 09:10:55.707648  315313 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:10:55.707665  315313 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:10:55.707739  315313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:10:55.707879  315313 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:10:55.707893  315313 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:10:55.708050  315313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:10:55.708187  315313 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:10:55.708202  315313 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:10:55.708251  315313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:10:55.708343  315313 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-213278 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-213278 localhost minikube]
	I1206 09:10:55.775266  315313 provision.go:177] copyRemoteCerts
	I1206 09:10:55.775325  315313 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:10:55.775368  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.795034  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:55.892127  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:10:55.910165  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1206 09:10:55.930626  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:10:55.950072  315313 provision.go:87] duration metric: took 261.288758ms to configureAuth
	I1206 09:10:55.950094  315313 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:10:55.950310  315313 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:55.950444  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.971862  315313 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:55.972094  315313 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 09:10:55.972113  315313 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:10:56.589528  315313 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:10:56.589568  315313 machine.go:97] duration metric: took 4.392505581s to provisionDockerMachine
	I1206 09:10:56.589581  315313 start.go:293] postStartSetup for "default-k8s-diff-port-213278" (driver="docker")
	I1206 09:10:56.589595  315313 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:10:56.589668  315313 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:10:56.589714  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:56.610051  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	W1206 09:10:52.093124  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	W1206 09:10:54.587033  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	I1206 09:10:55.585975  302585 pod_ready.go:94] pod "coredns-66bc5c9577-x87kt" is "Ready"
	I1206 09:10:55.586008  302585 pod_ready.go:86] duration metric: took 38.505136087s for pod "coredns-66bc5c9577-x87kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.588734  302585 pod_ready.go:83] waiting for pod "etcd-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.593041  302585 pod_ready.go:94] pod "etcd-embed-certs-931091" is "Ready"
	I1206 09:10:55.593063  302585 pod_ready.go:86] duration metric: took 4.302801ms for pod "etcd-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.595093  302585 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.598822  302585 pod_ready.go:94] pod "kube-apiserver-embed-certs-931091" is "Ready"
	I1206 09:10:55.598845  302585 pod_ready.go:86] duration metric: took 3.728057ms for pod "kube-apiserver-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.601129  302585 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.784497  302585 pod_ready.go:94] pod "kube-controller-manager-embed-certs-931091" is "Ready"
	I1206 09:10:55.784528  302585 pod_ready.go:86] duration metric: took 183.382182ms for pod "kube-controller-manager-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.985153  302585 pod_ready.go:83] waiting for pod "kube-proxy-9hp5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:56.384742  302585 pod_ready.go:94] pod "kube-proxy-9hp5d" is "Ready"
	I1206 09:10:56.384766  302585 pod_ready.go:86] duration metric: took 399.589861ms for pod "kube-proxy-9hp5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:56.584419  302585 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:56.984660  302585 pod_ready.go:94] pod "kube-scheduler-embed-certs-931091" is "Ready"
	I1206 09:10:56.984687  302585 pod_ready.go:86] duration metric: took 400.242736ms for pod "kube-scheduler-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:56.984703  302585 pod_ready.go:40] duration metric: took 39.907860837s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:10:57.035048  302585 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:10:57.037511  302585 out.go:179] * Done! kubectl is now configured to use "embed-certs-931091" cluster and "default" namespace by default
	I1206 09:10:56.703702  315313 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:10:56.707285  315313 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:10:56.707321  315313 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:10:56.707330  315313 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:10:56.707377  315313 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:10:56.707452  315313 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:10:56.707534  315313 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:10:56.715119  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:10:56.733640  315313 start.go:296] duration metric: took 144.043086ms for postStartSetup
	I1206 09:10:56.733732  315313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:10:56.733785  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:56.752147  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:56.845082  315313 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:10:56.850022  315313 fix.go:56] duration metric: took 4.970326552s for fixHost
	I1206 09:10:56.850051  315313 start.go:83] releasing machines lock for "default-k8s-diff-port-213278", held for 4.970405589s
	I1206 09:10:56.850128  315313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-213278
	I1206 09:10:56.870603  315313 ssh_runner.go:195] Run: cat /version.json
	I1206 09:10:56.870656  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:56.870691  315313 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:10:56.870775  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:56.889848  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:56.890168  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:57.045224  315313 ssh_runner.go:195] Run: systemctl --version
	I1206 09:10:57.052155  315313 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:10:57.093508  315313 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:10:57.099046  315313 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:10:57.099122  315313 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:10:57.108766  315313 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:10:57.108790  315313 start.go:496] detecting cgroup driver to use...
	I1206 09:10:57.108834  315313 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:10:57.108897  315313 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:10:57.124885  315313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:10:57.138708  315313 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:10:57.138763  315313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:10:57.156947  315313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:10:57.171079  315313 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:10:57.259168  315313 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:10:57.358065  315313 docker.go:234] disabling docker service ...
	I1206 09:10:57.358143  315313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:10:57.374164  315313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:10:57.387046  315313 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:10:57.476213  315313 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:10:57.564815  315313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:10:57.577172  315313 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:10:57.592109  315313 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:10:57.592178  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.601330  315313 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:10:57.601382  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.610246  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.618884  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.627831  315313 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:10:57.636223  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.645891  315313 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.654733  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.663666  315313 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:10:57.671204  315313 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:10:57.678491  315313 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:57.762929  315313 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:10:57.910653  315313 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:10:57.910735  315313 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:10:57.914942  315313 start.go:564] Will wait 60s for crictl version
	I1206 09:10:57.915010  315313 ssh_runner.go:195] Run: which crictl
	I1206 09:10:57.918754  315313 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:10:57.944833  315313 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:10:57.944913  315313 ssh_runner.go:195] Run: crio --version
	I1206 09:10:57.974512  315313 ssh_runner.go:195] Run: crio --version
	I1206 09:10:58.014412  315313 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:10:58.020583  315313 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213278 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:10:58.041851  315313 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:10:58.046136  315313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:10:58.056513  315313 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:10:58.056605  315313 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:10:58.056641  315313 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:10:58.088905  315313 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:10:58.088926  315313 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:10:58.088967  315313 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:10:58.114515  315313 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:10:58.114537  315313 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:10:58.114544  315313 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1206 09:10:58.114623  315313 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-213278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:10:58.114689  315313 ssh_runner.go:195] Run: crio config
	I1206 09:10:58.161253  315313 cni.go:84] Creating CNI manager for ""
	I1206 09:10:58.161277  315313 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:10:58.161295  315313 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:10:58.161321  315313 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-213278 NodeName:default-k8s-diff-port-213278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:10:58.161474  315313 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-213278"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:10:58.161552  315313 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:10:58.169872  315313 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:10:58.169926  315313 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:10:58.177525  315313 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 09:10:58.189809  315313 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:10:58.202467  315313 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
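The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new as four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a rough illustration only (not minikube's own code), a small Go program could split that file back into its documents and read out the kubelet settings shown in the dump; the path and field names come from the log, everything else is assumed:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp line above; adjust if run elsewhere.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// The file holds several YAML documents separated by "---".
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Print the kubelet settings that the dump above configures.
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
			fmt.Println("containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
			fmt.Println("staticPodPath:", doc["staticPodPath"])
		}
	}
}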
	I1206 09:10:58.214580  315313 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:10:58.218300  315313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
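The /etc/hosts update above is idempotent: it strips any existing line ending in a tab plus control-plane.minikube.internal and then appends the current mapping. A minimal Go sketch of the same transformation (the function name and sample input are illustrative, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line that already ends with "\t<name>" and
// appends a fresh "<ip>\t<name>" mapping, mirroring the bash one-liner above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.85.2\tcontrol-plane.minikube.internal"
	fmt.Print(upsertHostsEntry(before, "192.168.85.2", "control-plane.minikube.internal"))
}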
	I1206 09:10:58.228621  315313 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:58.327389  315313 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:10:58.352390  315313 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278 for IP: 192.168.85.2
	I1206 09:10:58.352408  315313 certs.go:195] generating shared ca certs ...
	I1206 09:10:58.352424  315313 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:58.352587  315313 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:10:58.352644  315313 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:10:58.352657  315313 certs.go:257] generating profile certs ...
	I1206 09:10:58.352781  315313 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/client.key
	I1206 09:10:58.352854  315313 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key.817b52b0
	I1206 09:10:58.352909  315313 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.key
	I1206 09:10:58.353153  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:10:58.353210  315313 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:10:58.353233  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:10:58.353271  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:10:58.353303  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:10:58.353341  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:10:58.353404  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:10:58.354232  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:10:58.373433  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:10:58.392248  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:10:58.413630  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:10:58.436363  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1206 09:10:58.456681  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:10:58.473954  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:10:58.493330  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:10:58.511578  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:10:58.528902  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:10:58.546213  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:10:58.564434  315313 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:10:58.576669  315313 ssh_runner.go:195] Run: openssl version
	I1206 09:10:58.582846  315313 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:10:58.590389  315313 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:10:58.598299  315313 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:10:58.601860  315313 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:10:58.601922  315313 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:10:58.636617  315313 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:10:58.645679  315313 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:58.654050  315313 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:10:58.661724  315313 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:58.665505  315313 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:58.665556  315313 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:58.700574  315313 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:10:58.708268  315313 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:10:58.715643  315313 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:10:58.722968  315313 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:10:58.726852  315313 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:10:58.726895  315313 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:10:58.763854  315313 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:10:58.771869  315313 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:10:58.775629  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:10:58.810606  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:10:58.846292  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:10:58.895382  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:10:58.946630  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:10:59.008735  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
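Each "openssl x509 -checkend 86400" call above asks whether a control-plane certificate expires within the next 24 hours. A minimal Go equivalent using only the standard library (the cert path is one of those listed above; the helper itself is an assumption, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers for 24 hours.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}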
	I1206 09:10:59.063204  315313 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:59.063322  315313 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:10:59.063380  315313 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:10:59.092025  315313 cri.go:89] found id: "993bd9094e3710e4afa57b11133e4f8ed540f0bcf8e89c0258b11e42c9e374bc"
	I1206 09:10:59.092048  315313 cri.go:89] found id: "a151df72711445119fa366f7061cd8c8a8baa812129f92483b799ac38a9b7756"
	I1206 09:10:59.092053  315313 cri.go:89] found id: "8fe294be7962045740259ca379b55feefc319a86bae64f83cf89415bcf9eaea7"
	I1206 09:10:59.092059  315313 cri.go:89] found id: "877ac8d6fa140608aa94c4548bea183ea231d43b34b8e3afdb342cff6d7b7d13"
	I1206 09:10:59.092063  315313 cri.go:89] found id: ""
	I1206 09:10:59.092110  315313 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:10:59.108621  315313 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:10:59Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:10:59.108692  315313 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:10:59.119946  315313 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:10:59.119966  315313 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:10:59.120026  315313 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:10:59.129637  315313 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:10:59.130773  315313 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-213278" does not appear in /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:59.131571  315313 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-5617/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-213278" cluster setting kubeconfig missing "default-k8s-diff-port-213278" context setting]
	I1206 09:10:59.132698  315313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:59.134887  315313 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:10:59.144055  315313 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 09:10:59.144095  315313 kubeadm.go:602] duration metric: took 24.121886ms to restartPrimaryControlPlane
	I1206 09:10:59.144107  315313 kubeadm.go:403] duration metric: took 80.913986ms to StartCluster
	I1206 09:10:59.144132  315313 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:59.144206  315313 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:59.145927  315313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:59.146237  315313 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:10:59.146367  315313 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:10:59.146463  315313 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-213278"
	I1206 09:10:59.146475  315313 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:59.146480  315313 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-213278"
	W1206 09:10:59.146489  315313 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:10:59.146518  315313 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:10:59.146517  315313 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-213278"
	I1206 09:10:59.146533  315313 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-213278"
	W1206 09:10:59.146541  315313 addons.go:248] addon dashboard should already be in state true
	I1206 09:10:59.146557  315313 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:10:59.146898  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:59.146975  315313 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-213278"
	I1206 09:10:59.147035  315313 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-213278"
	I1206 09:10:59.147005  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:59.147308  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:59.148435  315313 out.go:179] * Verifying Kubernetes components...
	I1206 09:10:59.150070  315313 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:59.174374  315313 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 09:10:59.174481  315313 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:10:59.175805  315313 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:59.175873  315313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:10:59.175850  315313 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 09:10:59.175946  315313 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-213278"
	W1206 09:10:59.175959  315313 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:10:59.175966  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.937145  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:56.436636  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:56.936559  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:57.437294  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:57.936374  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:58.437220  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:58.937188  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:59.437228  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:59.536872  312610 kubeadm.go:1114] duration metric: took 4.1909945s to wait for elevateKubeSystemPrivileges
	I1206 09:10:59.536909  312610 kubeadm.go:403] duration metric: took 14.722983517s to StartCluster
	I1206 09:10:59.536931  312610 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:59.537014  312610 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:59.539075  312610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:59.539396  312610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:10:59.539404  312610 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:10:59.539554  312610 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:10:59.539650  312610 addons.go:70] Setting storage-provisioner=true in profile "calico-646473"
	I1206 09:10:59.539673  312610 addons.go:239] Setting addon storage-provisioner=true in "calico-646473"
	I1206 09:10:59.539703  312610 host.go:66] Checking if "calico-646473" exists ...
	I1206 09:10:59.539730  312610 config.go:182] Loaded profile config "calico-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:59.539777  312610 addons.go:70] Setting default-storageclass=true in profile "calico-646473"
	I1206 09:10:59.539797  312610 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-646473"
	I1206 09:10:59.540222  312610 cli_runner.go:164] Run: docker container inspect calico-646473 --format={{.State.Status}}
	I1206 09:10:59.540289  312610 cli_runner.go:164] Run: docker container inspect calico-646473 --format={{.State.Status}}
	I1206 09:10:59.545328  312610 out.go:179] * Verifying Kubernetes components...
	I1206 09:10:59.547569  312610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:59.578361  312610 addons.go:239] Setting addon default-storageclass=true in "calico-646473"
	I1206 09:10:59.578613  312610 host.go:66] Checking if "calico-646473" exists ...
	I1206 09:10:59.580350  312610 cli_runner.go:164] Run: docker container inspect calico-646473 --format={{.State.Status}}
	I1206 09:10:59.585878  312610 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:10:59.586928  312610 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:59.587013  312610 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:10:59.587136  312610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-646473
	I1206 09:10:59.617212  312610 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:59.618194  312610 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:10:59.618395  312610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-646473
	I1206 09:10:59.627866  312610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/calico-646473/id_rsa Username:docker}
	I1206 09:10:59.658133  312610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/calico-646473/id_rsa Username:docker}
	I1206 09:10:59.704679  312610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:10:59.756248  312610 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:10:59.760450  312610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:59.801800  312610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:59.974070  312610 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1206 09:10:59.977086  312610 node_ready.go:35] waiting up to 15m0s for node "calico-646473" to be "Ready" ...
	I1206 09:11:00.181522  312610 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:11:00.182670  312610 addons.go:530] duration metric: took 643.115598ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:11:00.480324  312610 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-646473" context rescaled to 1 replicas
	I1206 09:10:59.175983  315313 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:10:59.176498  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:59.177118  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 09:10:59.177136  315313 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 09:10:59.177182  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:59.209949  315313 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:59.209979  315313 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:10:59.210072  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:59.210396  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:59.216317  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:59.245377  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:59.315436  315313 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:10:59.328968  315313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:59.330363  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:10:59.330384  315313 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:10:59.332748  315313 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-213278" to be "Ready" ...
	I1206 09:10:59.350823  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:10:59.350854  315313 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:10:59.356591  315313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:59.370336  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:10:59.370361  315313 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:10:59.396855  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:10:59.396879  315313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:10:59.416505  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:10:59.416571  315313 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:10:59.435080  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:10:59.435112  315313 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:10:59.455309  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:10:59.455349  315313 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:10:59.481751  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:10:59.481782  315313 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:10:59.504517  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:10:59.504545  315313 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:10:59.521548  315313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
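After each dashboard manifest is copied into /etc/kubernetes/addons, they are applied in a single kubectl invocation with one -f flag per file, as in the Run line above. A rough Go sketch of building that call (the exec wrapper is illustrative; the manifest paths and kubectl binary location come from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Manifest file names taken from the apply command above.
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml", "dashboard-clusterrolebinding.yaml",
		"dashboard-configmap.yaml", "dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml", "dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.2/kubectl", args...)
	// Point kubectl at the in-cluster kubeconfig, as the log's KUBECONFIG= prefix does.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}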
	I1206 09:11:00.768196  315313 node_ready.go:49] node "default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:00.768229  315313 node_ready.go:38] duration metric: took 1.435456237s for node "default-k8s-diff-port-213278" to be "Ready" ...
	I1206 09:11:00.768265  315313 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:11:00.768351  315313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:11:01.472126  315313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.143118433s)
	I1206 09:11:01.472244  315313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.115374804s)
	I1206 09:11:01.472282  315313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.950690285s)
	I1206 09:11:01.472509  315313 api_server.go:72] duration metric: took 2.326237153s to wait for apiserver process to appear ...
	I1206 09:11:01.472520  315313 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:11:01.472538  315313 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1206 09:11:01.473954  315313 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-213278 addons enable metrics-server
	
	I1206 09:11:01.479362  315313 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:11:01.479393  315313 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:11:01.480945  315313 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1206 09:11:01.482133  315313 addons.go:530] duration metric: took 2.335774958s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1206 09:11:01.981084  312610 node_ready.go:57] node "calico-646473" has "Ready":"False" status (will retry)
	I1206 09:11:03.980268  312610 node_ready.go:49] node "calico-646473" is "Ready"
	I1206 09:11:03.980298  312610 node_ready.go:38] duration metric: took 4.003166665s for node "calico-646473" to be "Ready" ...
	I1206 09:11:03.980324  312610 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:11:03.980377  312610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:11:03.995162  312610 api_server.go:72] duration metric: took 4.455726706s to wait for apiserver process to appear ...
	I1206 09:11:03.995192  312610 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:11:03.995213  312610 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:11:04.000224  312610 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1206 09:11:04.001467  312610 api_server.go:141] control plane version: v1.34.2
	I1206 09:11:04.001496  312610 api_server.go:131] duration metric: took 6.297072ms to wait for apiserver health ...
	I1206 09:11:04.001507  312610 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:11:04.006004  312610 system_pods.go:59] 9 kube-system pods found
	I1206 09:11:04.006063  312610 system_pods.go:61] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:04.006076  312610 system_pods.go:61] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:04.006089  312610 system_pods.go:61] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:04.006095  312610 system_pods.go:61] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:04.006101  312610 system_pods.go:61] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:04.006112  312610 system_pods.go:61] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:04.006117  312610 system_pods.go:61] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:04.006131  312610 system_pods.go:61] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:04.006139  312610 system_pods.go:61] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:04.006146  312610 system_pods.go:74] duration metric: took 4.632445ms to wait for pod list to return data ...
	I1206 09:11:04.006156  312610 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:11:04.009146  312610 default_sa.go:45] found service account: "default"
	I1206 09:11:04.009175  312610 default_sa.go:55] duration metric: took 3.0087ms for default service account to be created ...
	I1206 09:11:04.009186  312610 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:11:04.012737  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:04.012765  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:04.012773  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:04.012780  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:04.012784  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:04.012788  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:04.012793  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:04.012796  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:04.012800  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:04.012805  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:04.012834  312610 retry.go:31] will retry after 286.404559ms: missing components: kube-dns
	I1206 09:11:04.305703  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:04.305736  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:04.305744  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:04.305753  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:04.305814  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:04.305828  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:04.305845  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:04.305858  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:04.305870  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:04.305891  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:04.305913  312610 retry.go:31] will retry after 341.917872ms: missing components: kube-dns
	I1206 09:11:04.653375  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:04.653408  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:04.653419  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:04.653482  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:04.653560  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:04.653573  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:04.653581  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:04.653591  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:04.653599  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:04.653621  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:04.653645  312610 retry.go:31] will retry after 441.833935ms: missing components: kube-dns
	I1206 09:11:05.101281  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:05.101328  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:05.101340  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:05.101430  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:05.101442  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:05.101450  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:05.101481  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:05.101496  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:05.101504  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:05.101509  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:05.101526  312610 retry.go:31] will retry after 485.497195ms: missing components: kube-dns
	I1206 09:11:05.592676  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:05.592724  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:05.592740  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:05.592750  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:05.592762  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:05.592769  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:05.592777  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:05.592786  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:05.592793  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:05.592801  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:05.592819  312610 retry.go:31] will retry after 566.418639ms: missing components: kube-dns
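The "retry.go:31] will retry after ..." lines above show the polling loop that waits for kube-dns, with a growing delay between checks. A minimal sketch of that pattern, assuming a simple condition function and a fixed deadline (names are illustrative, not minikube's retry API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls condition until it returns nil or timeout elapses, sleeping a
// growing delay between attempts, similar to the "will retry after ..." lines above.
func waitFor(timeout time.Duration, condition func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		err := condition()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the interval between checks
	}
}

func main() {
	attempts := 0
	if err := waitFor(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}); err != nil {
		panic(err)
	}
	fmt.Println("kube-dns ready after", attempts, "checks")
}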
	I1206 09:11:01.972809  315313 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1206 09:11:01.978685  315313 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:11:01.978715  315313 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:11:02.473186  315313 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1206 09:11:02.478828  315313 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1206 09:11:02.480458  315313 api_server.go:141] control plane version: v1.34.2
	I1206 09:11:02.480485  315313 api_server.go:131] duration metric: took 1.00795904s to wait for apiserver health ...
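	(Editorial sketch, not part of the captured log: the api_server.go lines above show the health-wait pattern this report relies on, repeatedly GETting /healthz, treating a 500 body such as the failing rbac/bootstrap-roles hook as "not ready yet", and succeeding once the endpoint returns 200 "ok". The snippet below is a minimal illustration of that polling loop only; the function name, the InsecureSkipVerify client, and the hard-coded URL are assumptions for the sketch and not minikube's actual implementation.)

	// Illustrative only: poll an apiserver /healthz endpoint until it returns
	// 200 "ok" or the context deadline expires, printing the failure body
	// (which lists the failing checks) on non-200 responses.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(ctx context.Context, url string) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// This ad-hoc client does not trust the apiserver certificate, so the
			// sketch skips verification; minikube instead uses the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200 "ok"
				}
				// A 500 body enumerates the failing checks, e.g. rbac/bootstrap-roles.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForHealthz(ctx, "https://192.168.85.2:8444/healthz"); err != nil {
			fmt.Println(err)
		}
	}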
	I1206 09:11:02.480496  315313 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:11:02.484229  315313 system_pods.go:59] 8 kube-system pods found
	I1206 09:11:02.484280  315313 system_pods.go:61] "coredns-66bc5c9577-54hvq" [f156a081-19f1-4a04-8234-24500867cf67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:02.484296  315313 system_pods.go:61] "etcd-default-k8s-diff-port-213278" [ba81dffa-f8a1-43a6-bda3-de5197a2764e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:11:02.484312  315313 system_pods.go:61] "kindnet-4jw2t" [1e817daf-c694-4ddf-8e08-85f504421f9b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:11:02.484321  315313 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-213278" [00ae632e-2d8d-48fc-a219-a8411d843ff5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:11:02.484335  315313 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-213278" [990ac86b-e97e-4874-94f3-88bc015c02bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:02.484347  315313 system_pods.go:61] "kube-proxy-86f62" [6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:11:02.484360  315313 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-213278" [c5fc3b4d-b7fb-42ba-b275-28d2c56e9b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:02.484368  315313 system_pods.go:61] "storage-provisioner" [4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:02.484377  315313 system_pods.go:74] duration metric: took 3.872776ms to wait for pod list to return data ...
	I1206 09:11:02.484390  315313 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:11:02.486893  315313 default_sa.go:45] found service account: "default"
	I1206 09:11:02.486916  315313 default_sa.go:55] duration metric: took 2.520161ms for default service account to be created ...
	I1206 09:11:02.486927  315313 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:11:02.489926  315313 system_pods.go:86] 8 kube-system pods found
	I1206 09:11:02.489958  315313 system_pods.go:89] "coredns-66bc5c9577-54hvq" [f156a081-19f1-4a04-8234-24500867cf67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:02.489971  315313 system_pods.go:89] "etcd-default-k8s-diff-port-213278" [ba81dffa-f8a1-43a6-bda3-de5197a2764e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:11:02.489979  315313 system_pods.go:89] "kindnet-4jw2t" [1e817daf-c694-4ddf-8e08-85f504421f9b] Running
	I1206 09:11:02.490019  315313 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-213278" [00ae632e-2d8d-48fc-a219-a8411d843ff5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:11:02.490032  315313 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-213278" [990ac86b-e97e-4874-94f3-88bc015c02bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:02.490041  315313 system_pods.go:89] "kube-proxy-86f62" [6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:11:02.490052  315313 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-213278" [c5fc3b4d-b7fb-42ba-b275-28d2c56e9b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:02.490064  315313 system_pods.go:89] "storage-provisioner" [4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:02.490077  315313 system_pods.go:126] duration metric: took 3.142256ms to wait for k8s-apps to be running ...
	I1206 09:11:02.490088  315313 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:11:02.490139  315313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:02.507849  315313 system_svc.go:56] duration metric: took 17.750336ms WaitForService to wait for kubelet
	I1206 09:11:02.507877  315313 kubeadm.go:587] duration metric: took 3.361605718s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:11:02.507900  315313 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:11:02.510842  315313 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:11:02.510873  315313 node_conditions.go:123] node cpu capacity is 8
	I1206 09:11:02.510892  315313 node_conditions.go:105] duration metric: took 2.985295ms to run NodePressure ...
	I1206 09:11:02.510906  315313 start.go:242] waiting for startup goroutines ...
	I1206 09:11:02.510929  315313 start.go:247] waiting for cluster config update ...
	I1206 09:11:02.510943  315313 start.go:256] writing updated cluster config ...
	I1206 09:11:02.511286  315313 ssh_runner.go:195] Run: rm -f paused
	I1206 09:11:02.515867  315313 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:11:02.519770  315313 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-54hvq" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:11:04.526748  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	W1206 09:11:06.527128  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	I1206 09:11:06.164199  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:06.164241  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:06.164257  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:06.164265  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:06.164273  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:06.164291  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:06.164297  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:06.164304  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:06.164310  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:06.164317  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:06.164334  312610 retry.go:31] will retry after 787.981849ms: missing components: kube-dns
	I1206 09:11:06.960250  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:06.960289  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:06.960302  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:06.960311  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:06.960317  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:06.960324  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:06.960330  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:06.960337  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:06.960342  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:06.960347  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:06.960365  312610 retry.go:31] will retry after 1.055542155s: missing components: kube-dns
	I1206 09:11:08.020370  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:08.020409  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:08.020423  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:08.020433  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:08.020439  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:08.020446  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:08.020450  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:08.020456  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:08.020463  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:08.020467  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:08.020483  312610 retry.go:31] will retry after 1.081769528s: missing components: kube-dns
	I1206 09:11:09.111772  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:09.111813  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:09.111825  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:09.111835  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:09.111843  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:09.111851  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:09.111857  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:09.111862  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:09.111867  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:09.111873  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:09.111891  312610 retry.go:31] will retry after 1.327495758s: missing components: kube-dns
	I1206 09:11:10.444781  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:10.444821  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:10.444852  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:10.444865  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:10.444876  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:10.444885  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:10.444894  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:10.444903  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:10.444912  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:10.444918  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:10.444938  312610 retry.go:31] will retry after 2.037774599s: missing components: kube-dns
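	(Editorial sketch, not part of the captured log: the retry.go lines above re-run the kube-system pod check after a growing, jittered delay while a required component such as kube-dns is still missing. The helper below illustrates that retry-with-backoff pattern only; the helper name and the backoff/jitter constants are assumptions for the sketch, not minikube's retry.go.)

	// Illustrative only: re-run a check with an increasing, jittered delay until
	// it stops reporting an error or the overall timeout elapses, logging each
	// retry the way the report does ("will retry after ...: missing components").
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		wait := 500 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			// Add jitter and grow the delay, capping it so retries stay frequent.
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			if wait < 2*time.Second {
				wait += wait / 2
			}
		}
	}

	func main() {
		missing := 3 // pretend kube-dns needs a few polls before it is running
		err := retryUntil(30*time.Second, func() error {
			if missing > 0 {
				missing--
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("done:", err)
	}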
	
	
	==> CRI-O <==
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.497178801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.49736663Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/56087d36d071ab91820385145ce3ae749ddfad6d74f93dc0f783f143d6ef5c14/merged/etc/passwd: no such file or directory"
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.49740937Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/56087d36d071ab91820385145ce3ae749ddfad6d74f93dc0f783f143d6ef5c14/merged/etc/group: no such file or directory"
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.497665622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.523824812Z" level=info msg="Created container 18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da: kube-system/storage-provisioner/storage-provisioner" id=e5c60e86-bbcc-4e74-952e-eb35d0536cd0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.524478844Z" level=info msg="Starting container: 18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da" id=980e4cf6-6a98-4089-b663-54c800867ca1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.526526045Z" level=info msg="Started container" PID=1721 containerID=18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da description=kube-system/storage-provisioner/storage-provisioner id=980e4cf6-6a98-4089-b663-54c800867ca1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c58bfbe5b25978dfec19c32b60558915afdac2dacc2667d1fa145764f00ba4e1
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.080344875Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.085705921Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.085741991Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.085762315Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.090451021Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.090554166Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.090582033Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.095105456Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.095133759Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.095158692Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.100180736Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.100213059Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.100233052Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.104442828Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.104468082Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.104491779Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.109265602Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.109294906Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	18df3e3592460       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   c58bfbe5b2597       storage-provisioner                          kube-system
	bb28e56c23678       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   65a2c3bbf703d       dashboard-metrics-scraper-6ffb444bf9-jhnrz   kubernetes-dashboard
	682529937bb65       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   5c13db0ef09c1       kubernetes-dashboard-855c9754f9-68gdp        kubernetes-dashboard
	e1cb1e6a344a1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   f7a062009d92d       coredns-66bc5c9577-x87kt                     kube-system
	4a4d1fca96529       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   04192ee4b5268       busybox                                      default
	b82c97edbdf4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   c58bfbe5b2597       storage-provisioner                          kube-system
	37108c9bddfdb       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           55 seconds ago      Running             kube-proxy                  0                   a0d1a3a287672       kube-proxy-9hp5d                             kube-system
	edd89974c1046       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   53a19c0f92bec       kindnet-kzpz2                                kube-system
	a846117bc72b7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           58 seconds ago      Running             kube-scheduler              0                   db2ab96611679       kube-scheduler-embed-certs-931091            kube-system
	04174e56b26bf       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           58 seconds ago      Running             kube-apiserver              0                   d664999ead8e2       kube-apiserver-embed-certs-931091            kube-system
	9a3dc4e5add4a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   869d241ba4325       etcd-embed-certs-931091                      kube-system
	893b7522c648e       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           58 seconds ago      Running             kube-controller-manager     0                   4f338e793f1c3       kube-controller-manager-embed-certs-931091   kube-system
	
	
	==> coredns [e1cb1e6a344a1ed0d926d7ad94b48af2f3c736de156bc566f54758c87a09ee4e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49021 - 63698 "HINFO IN 4375437974956359104.602222676637547894. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05027852s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-931091
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-931091
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=embed-certs-931091
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:09:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-931091
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:11:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:11:06 +0000   Sat, 06 Dec 2025 09:09:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:11:06 +0000   Sat, 06 Dec 2025 09:09:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:11:06 +0000   Sat, 06 Dec 2025 09:09:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:11:06 +0000   Sat, 06 Dec 2025 09:09:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-931091
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                ca3719f5-d0e6-4020-bdb6-8b9c5b73b4fa
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-x87kt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-931091                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-kzpz2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-931091             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-931091    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-9hp5d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-931091             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jhnrz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-68gdp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-931091 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-931091 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-931091 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node embed-certs-931091 event: Registered Node embed-certs-931091 in Controller
	  Normal  NodeReady                96s                kubelet          Node embed-certs-931091 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node embed-certs-931091 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node embed-certs-931091 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node embed-certs-931091 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-931091 event: Registered Node embed-certs-931091 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [9a3dc4e5add4a40d23fc1d867a32c27494d2f0aa5fe72049c03da86c84d3090b] <==
	{"level":"warn","ts":"2025-12-06T09:10:14.734311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.744150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.754601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.763447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.773566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.782382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.793047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.802309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.810082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.818709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.828874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.838294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.847232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.856154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.864554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.874540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.894447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.901798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.911090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.921106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.929025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.944123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.953643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.962016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:15.028273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55578","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:12 up 53 min,  0 user,  load average: 4.29, 3.21, 2.13
	Linux embed-certs-931091 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [edd89974c1046589be1d988771842ab006817d9cb74b7aa914e30d9c1988d400] <==
	I1206 09:10:16.873912       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:10:16.874212       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1206 09:10:16.874394       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:10:16.874412       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:10:16.874439       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:10:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:10:17.079114       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:10:17.079167       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:10:17.079189       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:10:17.079390       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 09:10:47.080385       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 09:10:47.080385       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 09:10:47.080418       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 09:10:47.080438       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1206 09:10:48.579865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:10:48.579896       1 metrics.go:72] Registering metrics
	I1206 09:10:48.580020       1 controller.go:711] "Syncing nftables rules"
	I1206 09:10:57.079967       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:10:57.080059       1 main.go:301] handling current node
	I1206 09:11:07.079674       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:11:07.079716       1 main.go:301] handling current node
	
	
	==> kube-apiserver [04174e56b26bf5e8534176ff57e230be3ed770891a615c3b75077b0468d06685] <==
	I1206 09:10:15.548737       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:10:15.548893       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1206 09:10:15.548927       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:10:15.549073       1 aggregator.go:171] initial CRD sync complete...
	I1206 09:10:15.549084       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:10:15.549088       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:10:15.549093       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:10:15.555354       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:10:15.555682       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:10:15.563585       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 09:10:15.563709       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:10:15.567147       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 09:10:15.567177       1 policy_source.go:240] refreshing policies
	I1206 09:10:15.595781       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:10:15.850786       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:10:15.879510       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:10:15.900716       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:10:15.907300       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:10:15.913832       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:10:15.962111       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.135.112"}
	I1206 09:10:15.973939       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.41.101"}
	I1206 09:10:16.453319       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:10:19.080358       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:10:19.377832       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:10:19.527767       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [893b7522c648e625ee7cedf9142d4b1472b197d9b456f9a6939ff5eafca0b904] <==
	I1206 09:10:18.924669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 09:10:18.924689       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:10:18.924711       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:10:18.924720       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:10:18.924746       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:10:18.924755       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:10:18.924768       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:10:18.924878       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:10:18.925137       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:10:18.925233       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:10:18.926327       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:10:18.926355       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:10:18.930798       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:10:18.930805       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:10:18.932010       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:10:18.936148       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:10:18.939454       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:10:18.941719       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:10:18.942898       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:10:18.945226       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:10:18.950606       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:10:18.950620       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:10:18.950629       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:10:18.950657       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:10:18.954019       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [37108c9bddfdb2c8b274f5250ba39d648e2efb7d93b47c46241aea6a5696a5cf] <==
	I1206 09:10:16.771632       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:10:16.846822       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:10:16.947430       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:10:16.947483       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1206 09:10:16.947605       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:10:16.966776       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:10:16.966858       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:10:16.973570       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:10:16.974023       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:10:16.974066       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:10:16.975803       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:10:16.976961       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:10:16.976408       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:10:16.977049       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:10:16.976949       1 config.go:200] "Starting service config controller"
	I1206 09:10:16.977062       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:10:16.977576       1 config.go:309] "Starting node config controller"
	I1206 09:10:16.977597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:10:16.977604       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:10:17.077216       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:10:17.077232       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:10:17.077250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a846117bc72b733c4769ce32f31c23b76ae89d79e9fd9cf10be97e49bc2b4a74] <==
	I1206 09:10:14.840155       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:10:15.470970       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:10:15.471130       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:10:15.471155       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:10:15.471166       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:10:15.512294       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:10:15.512713       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:10:15.518034       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:10:15.518080       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:10:15.519493       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:10:15.519725       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:10:15.619255       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:10:19 embed-certs-931091 kubelet[732]: I1206 09:10:19.529232     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r544n\" (UniqueName: \"kubernetes.io/projected/54fbcd33-4737-4881-ab3e-5359f143b463-kube-api-access-r544n\") pod \"dashboard-metrics-scraper-6ffb444bf9-jhnrz\" (UID: \"54fbcd33-4737-4881-ab3e-5359f143b463\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz"
	Dec 06 09:10:23 embed-certs-931091 kubelet[732]: I1206 09:10:23.418680     732 scope.go:117] "RemoveContainer" containerID="946458d74e645fc5b3f9560ba0e099e512a25bc1754b686164fcf3f981740746"
	Dec 06 09:10:24 embed-certs-931091 kubelet[732]: I1206 09:10:24.423423     732 scope.go:117] "RemoveContainer" containerID="946458d74e645fc5b3f9560ba0e099e512a25bc1754b686164fcf3f981740746"
	Dec 06 09:10:24 embed-certs-931091 kubelet[732]: I1206 09:10:24.423614     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:24 embed-certs-931091 kubelet[732]: E1206 09:10:24.423825     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:10:25 embed-certs-931091 kubelet[732]: I1206 09:10:25.133868     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 06 09:10:25 embed-certs-931091 kubelet[732]: I1206 09:10:25.427640     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:25 embed-certs-931091 kubelet[732]: E1206 09:10:25.427844     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:10:26 embed-certs-931091 kubelet[732]: I1206 09:10:26.441807     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-68gdp" podStartSLOduration=1.138686453 podStartE2EDuration="7.441786727s" podCreationTimestamp="2025-12-06 09:10:19 +0000 UTC" firstStartedPulling="2025-12-06 09:10:19.777100277 +0000 UTC m=+6.509616277" lastFinishedPulling="2025-12-06 09:10:26.080200542 +0000 UTC m=+12.812716551" observedRunningTime="2025-12-06 09:10:26.44173749 +0000 UTC m=+13.174253505" watchObservedRunningTime="2025-12-06 09:10:26.441786727 +0000 UTC m=+13.174302739"
	Dec 06 09:10:31 embed-certs-931091 kubelet[732]: I1206 09:10:31.148805     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:31 embed-certs-931091 kubelet[732]: E1206 09:10:31.149020     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:10:43 embed-certs-931091 kubelet[732]: I1206 09:10:43.359117     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:43 embed-certs-931091 kubelet[732]: I1206 09:10:43.475591     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:43 embed-certs-931091 kubelet[732]: I1206 09:10:43.475868     732 scope.go:117] "RemoveContainer" containerID="bb28e56c236788efe1a069138e54e21d5360b795aab10ade3cdc683c428bde46"
	Dec 06 09:10:43 embed-certs-931091 kubelet[732]: E1206 09:10:43.476156     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:10:47 embed-certs-931091 kubelet[732]: I1206 09:10:47.489324     732 scope.go:117] "RemoveContainer" containerID="b82c97edbdf4fe03412ca2f96bcd004be9526498d0c3112aa46de52d9a2f0c3c"
	Dec 06 09:10:51 embed-certs-931091 kubelet[732]: I1206 09:10:51.148287     732 scope.go:117] "RemoveContainer" containerID="bb28e56c236788efe1a069138e54e21d5360b795aab10ade3cdc683c428bde46"
	Dec 06 09:10:51 embed-certs-931091 kubelet[732]: E1206 09:10:51.148542     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:11:02 embed-certs-931091 kubelet[732]: I1206 09:11:02.358557     732 scope.go:117] "RemoveContainer" containerID="bb28e56c236788efe1a069138e54e21d5360b795aab10ade3cdc683c428bde46"
	Dec 06 09:11:02 embed-certs-931091 kubelet[732]: E1206 09:11:02.358810     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:11:09 embed-certs-931091 kubelet[732]: I1206 09:11:09.496195     732 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 06 09:11:09 embed-certs-931091 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:11:09 embed-certs-931091 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:11:09 embed-certs-931091 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:11:09 embed-certs-931091 systemd[1]: kubelet.service: Consumed 1.814s CPU time.
	
	
	==> kubernetes-dashboard [682529937bb653f6ae7d2415238d63ec894db888c269bbed09b7929099eb766b] <==
	2025/12/06 09:10:26 Using namespace: kubernetes-dashboard
	2025/12/06 09:10:26 Using in-cluster config to connect to apiserver
	2025/12/06 09:10:26 Using secret token for csrf signing
	2025/12/06 09:10:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:10:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:10:26 Successful initial request to the apiserver, version: v1.34.2
	2025/12/06 09:10:26 Generating JWE encryption key
	2025/12/06 09:10:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:10:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:10:26 Initializing JWE encryption key from synchronized object
	2025/12/06 09:10:26 Creating in-cluster Sidecar client
	2025/12/06 09:10:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:10:26 Serving insecurely on HTTP port: 9090
	2025/12/06 09:10:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:10:26 Starting overwatch
	
	
	==> storage-provisioner [18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da] <==
	I1206 09:10:47.538814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:10:47.546770       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:10:47.546828       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:10:47.549023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:51.004607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:55.265956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:58.865199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:01.919275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:04.941692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:04.946627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:11:04.946792       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:11:04.946934       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-931091_643d66e7-b329-4d97-b79b-b53f61108e9e!
	I1206 09:11:04.946934       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45ccfad5-6c96-43b7-8f37-e4ba5bb38e67", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-931091_643d66e7-b329-4d97-b79b-b53f61108e9e became leader
	W1206 09:11:04.949469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:04.952821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:11:05.047954       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-931091_643d66e7-b329-4d97-b79b-b53f61108e9e!
	W1206 09:11:06.956113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:06.964115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:08.968809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:09.017152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:11.022770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:11.029265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b82c97edbdf4fe03412ca2f96bcd004be9526498d0c3112aa46de52d9a2f0c3c] <==
	I1206 09:10:16.740072       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:10:46.743688       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-931091 -n embed-certs-931091
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-931091 -n embed-certs-931091: exit status 2 (351.188898ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-931091 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-931091
helpers_test.go:243: (dbg) docker inspect embed-certs-931091:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63",
	        "Created": "2025-12-06T09:09:01.161536877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:10:07.136555493Z",
	            "FinishedAt": "2025-12-06T09:10:04.96786348Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/hostname",
	        "HostsPath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/hosts",
	        "LogPath": "/var/lib/docker/containers/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63/6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63-json.log",
	        "Name": "/embed-certs-931091",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-931091:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-931091",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6aa3c5072933247d42f525fe898651ee029a2b885f73442f487370758ce75c63",
	                "LowerDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213/merged",
	                "UpperDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213/diff",
	                "WorkDir": "/var/lib/docker/overlay2/492a6762b26946f315cc89a7a55d44efe06478a5e7a791f7599ef4808c1bf213/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-931091",
	                "Source": "/var/lib/docker/volumes/embed-certs-931091/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-931091",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-931091",
	                "name.minikube.sigs.k8s.io": "embed-certs-931091",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "657393d437d7daaef6eb0a1cd7ce91aa3ac3278db512cd8ed528973189601d1f",
	            "SandboxKey": "/var/run/docker/netns/657393d437d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-931091": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70ecd367dba42d1818bd7c40275791d03131ddf8b1c44024d97d10092da13f1c",
	                    "EndpointID": "ced92d4b53e2858dc4b7f5db9baba991a30d491bcb16a9efea9c1bcf89a715c0",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "06:84:21:db:02:36",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-931091",
	                        "6aa3c5072933"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931091 -n embed-certs-931091
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931091 -n embed-certs-931091: exit status 2 (393.368524ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-931091 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-931091 logs -n 25: (1.579453952s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                       ARGS                                        │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p json-output-error-806429 --memory=3072 --output=json --wait=true --driver=fail │ json-output-error-806429 │ jenkins │ v1.37.0 │ 06 Dec 25 08:50 UTC │                     │
	│ delete  │ -p json-output-error-806429                                                       │ json-output-error-806429 │ jenkins │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:50 UTC │
	│ start   │ -p docker-network-913743 --network=                                               │ docker-network-913743    │ jenkins │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:50 UTC │
	│ delete  │ -p docker-network-913743                                                          │ docker-network-913743    │ jenkins │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:50 UTC │
	│ start   │ -p docker-network-878590 --network=bridge                                         │ docker-network-878590    │ jenkins │ v1.37.0 │ 06 Dec 25 08:50 UTC │ 06 Dec 25 08:51 UTC │
	│ delete  │ -p docker-network-878590                                                          │ docker-network-878590    │ jenkins │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:51 UTC │
	│ start   │ -p existing-network-911484 --network=existing-network                             │ existing-network-911484  │ jenkins │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:51 UTC │
	│ delete  │ -p existing-network-911484                                                        │ existing-network-911484  │ jenkins │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:51 UTC │
	│ start   │ -p custom-subnet-376661 --subnet=192.168.60.0/24                                  │ custom-subnet-376661     │ jenkins │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:51 UTC │
	│ delete  │ -p custom-subnet-376661                                                           │ custom-subnet-376661     │ jenkins │ v1.37.0 │ 06 Dec 25 08:51 UTC │ 06 Dec 25 08:52 UTC │
	│ start   │ -p static-ip-850043 --static-ip=192.168.200.200                                   │ static-ip-850043         │ jenkins │ v1.37.0 │ 06 Dec 25 08:52 UTC │ 06 Dec 25 08:52 UTC │
	│ ip      │ static-ip-850043 ip                                                               │ static-ip-850043         │ jenkins │ v1.37.0 │ 06 Dec 25 08:52 UTC │ 06 Dec 25 08:52 UTC │
	│ delete  │ -p static-ip-850043                                                               │ static-ip-850043         │ jenkins │ v1.37.0 │ 06 Dec 25 08:52 UTC │ 06 Dec 25 08:52 UTC │
	│ start   │ -p first-809574 --driver=docker  --container-runtime=crio                         │ first-809574             │ jenkins │ v1.37.0 │ 06 Dec 25 08:52 UTC │ 06 Dec 25 08:52 UTC │
	│ ssh     │ -p kindnet-646473 sudo crictl ps --all                                            │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ pause   │ -p embed-certs-931091 --alsologtostderr -v=1                                      │ embed-certs-931091       │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p kindnet-646473 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;     │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo ip a s                                                     │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo ip r s                                                     │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo iptables-save                                              │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo iptables -t nat -L -n -v                                   │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo systemctl status kubelet --all --full --no-pager           │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo systemctl cat kubelet --no-pager                           │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo journalctl -xeu kubelet --all --full --no-pager            │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p kindnet-646473 sudo cat /etc/kubernetes/kubelet.conf                           │ kindnet-646473           │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:51.687484  315313 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:51.687779  315313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:51.687791  315313 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:51.687797  315313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:51.688009  315313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:10:51.688477  315313 out.go:368] Setting JSON to false
	I1206 09:10:51.689734  315313 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3203,"bootTime":1765009049,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:51.689790  315313 start.go:143] virtualization: kvm guest
	I1206 09:10:51.692037  315313 out.go:179] * [default-k8s-diff-port-213278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:10:51.693500  315313 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:10:51.693501  315313 notify.go:221] Checking for updates...
	I1206 09:10:51.697175  315313 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:51.698593  315313 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:51.699972  315313 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:10:51.701481  315313 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:10:51.703043  315313 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:10:51.704694  315313 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:51.705366  315313 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:10:51.730310  315313 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:10:51.730386  315313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:10:51.790142  315313 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:10:51.779905966 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:10:51.790261  315313 docker.go:319] overlay module found
	I1206 09:10:51.792291  315313 out.go:179] * Using the docker driver based on existing profile
	W1206 09:10:47.086648  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	W1206 09:10:49.587106  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	I1206 09:10:51.793608  315313 start.go:309] selected driver: docker
	I1206 09:10:51.793622  315313 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:51.793742  315313 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:10:51.794336  315313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:10:51.852011  315313 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-06 09:10:51.842150635 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:10:51.852298  315313 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:10:51.852339  315313 cni.go:84] Creating CNI manager for ""
	I1206 09:10:51.852415  315313 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:10:51.852470  315313 start.go:353] cluster config:
	{Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:51.855111  315313 out.go:179] * Starting "default-k8s-diff-port-213278" primary control-plane node in "default-k8s-diff-port-213278" cluster
	I1206 09:10:51.856442  315313 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:10:51.858042  315313 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:10:51.859390  315313 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:10:51.859443  315313 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:10:51.859456  315313 cache.go:65] Caching tarball of preloaded images
	I1206 09:10:51.859504  315313 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:10:51.859539  315313 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:10:51.859550  315313 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:10:51.859680  315313 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json ...
	I1206 09:10:51.879523  315313 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:10:51.879542  315313 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:10:51.879557  315313 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:10:51.879583  315313 start.go:360] acquireMachinesLock for default-k8s-diff-port-213278: {Name:mk866228eff8eb9f8cbf106e77f0dc837aabddf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:10:51.879634  315313 start.go:364] duration metric: took 34.837µs to acquireMachinesLock for "default-k8s-diff-port-213278"
	I1206 09:10:51.879679  315313 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:10:51.879689  315313 fix.go:54] fixHost starting: 
	I1206 09:10:51.879889  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:51.898040  315313 fix.go:112] recreateIfNeeded on default-k8s-diff-port-213278: state=Stopped err=<nil>
	W1206 09:10:51.898081  315313 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:10:54.497657  312610 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:10:54.497761  312610 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:10:54.497895  312610 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:10:54.497983  312610 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:10:54.498111  312610 kubeadm.go:319] OS: Linux
	I1206 09:10:54.498184  312610 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:10:54.498255  312610 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:10:54.498327  312610 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:10:54.498395  312610 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:10:54.498470  312610 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:10:54.498544  312610 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:10:54.498620  312610 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:10:54.498698  312610 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:10:54.498820  312610 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:10:54.498965  312610 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:10:54.499127  312610 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:10:54.499216  312610 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:10:54.501504  312610 out.go:252]   - Generating certificates and keys ...
	I1206 09:10:54.501598  312610 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:10:54.501694  312610 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:10:54.501817  312610 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:10:54.501905  312610 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:10:54.502053  312610 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:10:54.502135  312610 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:10:54.502187  312610 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:10:54.502372  312610 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:10:54.502469  312610 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:10:54.502637  312610 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-646473 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:10:54.502738  312610 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:10:54.502843  312610 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:10:54.502912  312610 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:10:54.503019  312610 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:10:54.503091  312610 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:10:54.503173  312610 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:10:54.503250  312610 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:10:54.503352  312610 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:10:54.503441  312610 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:10:54.503567  312610 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:10:54.503661  312610 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:10:54.505199  312610 out.go:252]   - Booting up control plane ...
	I1206 09:10:54.505328  312610 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:10:54.505433  312610 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:10:54.505537  312610 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:10:54.505685  312610 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:10:54.505812  312610 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:10:54.505975  312610 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:10:54.506117  312610 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:10:54.506152  312610 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:10:54.506336  312610 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:10:54.506483  312610 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:10:54.506563  312610 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.792827ms
	I1206 09:10:54.506704  312610 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:10:54.506832  312610 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1206 09:10:54.506958  312610 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:10:54.507103  312610 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:10:54.507228  312610 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.431262327s
	I1206 09:10:54.507291  312610 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.290862862s
	I1206 09:10:54.507368  312610 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00162496s
	I1206 09:10:54.507486  312610 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:10:54.507661  312610 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:10:54.507748  312610 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:10:54.507969  312610 kubeadm.go:319] [mark-control-plane] Marking the node calico-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:10:54.508077  312610 kubeadm.go:319] [bootstrap-token] Using token: stnvv1.3a2zyuo6licwoyaf
	I1206 09:10:54.511048  312610 out.go:252]   - Configuring RBAC rules ...
	I1206 09:10:54.511185  312610 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:10:54.511312  312610 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:10:54.511527  312610 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:10:54.511713  312610 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:10:54.511911  312610 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:10:54.512063  312610 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:10:54.512261  312610 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:10:54.512339  312610 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:10:54.512407  312610 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:10:54.512417  312610 kubeadm.go:319] 
	I1206 09:10:54.512506  312610 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:10:54.512518  312610 kubeadm.go:319] 
	I1206 09:10:54.512636  312610 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:10:54.512653  312610 kubeadm.go:319] 
	I1206 09:10:54.512686  312610 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:10:54.512768  312610 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:10:54.512840  312610 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:10:54.512845  312610 kubeadm.go:319] 
	I1206 09:10:54.512921  312610 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:10:54.512927  312610 kubeadm.go:319] 
	I1206 09:10:54.512981  312610 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:10:54.513018  312610 kubeadm.go:319] 
	I1206 09:10:54.513083  312610 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:10:54.513215  312610 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:10:54.513314  312610 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:10:54.513324  312610 kubeadm.go:319] 
	I1206 09:10:54.513441  312610 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:10:54.513541  312610 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:10:54.513550  312610 kubeadm.go:319] 
	I1206 09:10:54.513665  312610 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token stnvv1.3a2zyuo6licwoyaf \
	I1206 09:10:54.513781  312610 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:10:54.513812  312610 kubeadm.go:319] 	--control-plane 
	I1206 09:10:54.513818  312610 kubeadm.go:319] 
	I1206 09:10:54.513929  312610 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:10:54.513939  312610 kubeadm.go:319] 
	I1206 09:10:54.514074  312610 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token stnvv1.3a2zyuo6licwoyaf \
	I1206 09:10:54.514218  312610 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
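
The join commands above pin the cluster CA via --discovery-token-ca-cert-hash. As a minimal sketch (not minikube's code), the same sha256 value can be recomputed from the CA certificate by hashing the DER-encoded SubjectPublicKeyInfo of its public key; the ca.crt path is the one minikube uses elsewhere in this log.

// cahash.go: recompute the kubeadm discovery-token-ca-cert-hash from a CA cert.
// Assumes the standard kubeadm pinning format: sha256 over the SPKI DER of the
// cluster CA public key. Sketch only.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path taken from this log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// SPKI DER of the CA public key, then sha256 over it.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum[:])
}
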
	I1206 09:10:54.514232  312610 cni.go:84] Creating CNI manager for "calico"
	I1206 09:10:54.515788  312610 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1206 09:10:54.517196  312610 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:10:54.517219  312610 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1206 09:10:54.535886  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:10:55.345858  312610 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:10:55.345957  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:55.346038  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-646473 minikube.k8s.io/updated_at=2025_12_06T09_10_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=calico-646473 minikube.k8s.io/primary=true
	I1206 09:10:55.436221  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:55.436259  312610 ops.go:34] apiserver oom_adj: -16
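
For reference, the Calico step above boils down to copying the manifest to /var/tmp/minikube/cni.yaml and applying it with the bundled kubectl. A hedged sketch of that apply step, run directly on the node; the binary, kubeconfig, and manifest paths are taken from the log and are not a stable interface.

// applycni.go: apply the staged CNI manifest with the bundled kubectl,
// mirroring the ssh_runner step logged above. Sketch only, not minikube code.
package main

import (
	"log"
	"os/exec"
)

func main() {
	apply := exec.Command(
		"sudo", "/var/lib/minikube/binaries/v1.34.2/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	out, err := apply.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Printf("applied CNI manifest:\n%s", out)
}
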
	I1206 09:10:51.900024  315313 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-213278" ...
	I1206 09:10:51.900102  315313 cli_runner.go:164] Run: docker start default-k8s-diff-port-213278
	I1206 09:10:52.158213  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:52.176915  315313 kic.go:430] container "default-k8s-diff-port-213278" state is running.
	I1206 09:10:52.177312  315313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-213278
	I1206 09:10:52.196809  315313 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/config.json ...
	I1206 09:10:52.197044  315313 machine.go:94] provisionDockerMachine start ...
	I1206 09:10:52.197104  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:52.216620  315313 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:52.216874  315313 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 09:10:52.216891  315313 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:10:52.217579  315313 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35720->127.0.0.1:33118: read: connection reset by peer
	I1206 09:10:55.371817  315313 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-213278
	
	I1206 09:10:55.371846  315313 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-213278"
	I1206 09:10:55.371930  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.395799  315313 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:55.396235  315313 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 09:10:55.396269  315313 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-213278 && echo "default-k8s-diff-port-213278" | sudo tee /etc/hostname
	I1206 09:10:55.539803  315313 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-213278
	
	I1206 09:10:55.539895  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.559243  315313 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:55.559565  315313 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 09:10:55.559599  315313 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-213278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-213278/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-213278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:10:55.688673  315313 main.go:143] libmachine: SSH cmd err, output: <nil>: 
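
The SSH script above keeps /etc/hosts consistent with the new hostname: if the name is absent, it either rewrites the existing 127.0.1.1 entry or appends one. A small sketch of the same rewrite applied to file contents in memory (ensureHostname is a hypothetical helper, not minikube's implementation).

package main

import (
	"fmt"
	"strings"
)

// ensureHostname returns hosts-file content with 127.0.1.1 mapped to name,
// mirroring the grep/sed/tee script in the SSH command above.
func ensureHostname(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts // hostname already present somewhere in the file
	}
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing 127.0.1.1 entry
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "default-k8s-diff-port-213278"))
}
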
	I1206 09:10:55.688702  315313 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:10:55.688742  315313 ubuntu.go:190] setting up certificates
	I1206 09:10:55.688767  315313 provision.go:84] configureAuth start
	I1206 09:10:55.688841  315313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-213278
	I1206 09:10:55.707587  315313 provision.go:143] copyHostCerts
	I1206 09:10:55.707648  315313 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:10:55.707665  315313 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:10:55.707739  315313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:10:55.707879  315313 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:10:55.707893  315313 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:10:55.708050  315313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:10:55.708187  315313 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:10:55.708202  315313 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:10:55.708251  315313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:10:55.708343  315313 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-213278 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-213278 localhost minikube]
	I1206 09:10:55.775266  315313 provision.go:177] copyRemoteCerts
	I1206 09:10:55.775325  315313 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:10:55.775368  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.795034  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:55.892127  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:10:55.910165  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1206 09:10:55.930626  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:10:55.950072  315313 provision.go:87] duration metric: took 261.288758ms to configureAuth
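
configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.85.2, the node name, localhost, and minikube. A self-contained sketch of producing a certificate with those SANs using crypto/x509; it is self-signed here for brevity (minikube actually signs with its CA), and the 26280h lifetime is the CertExpiration value from the profile config.

// servercert.go: generate a server certificate with the SANs listed in the
// configureAuth step above. Sketch only, not minikube's implementation.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-213278"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: 127.0.0.1 192.168.85.2 default-k8s-diff-port-213278 localhost minikube
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"default-k8s-diff-port-213278", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
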
	I1206 09:10:55.950094  315313 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:10:55.950310  315313 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:55.950444  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.971862  315313 main.go:143] libmachine: Using SSH client type: native
	I1206 09:10:55.972094  315313 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 09:10:55.972113  315313 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:10:56.589528  315313 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:10:56.589568  315313 machine.go:97] duration metric: took 4.392505581s to provisionDockerMachine
	I1206 09:10:56.589581  315313 start.go:293] postStartSetup for "default-k8s-diff-port-213278" (driver="docker")
	I1206 09:10:56.589595  315313 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:10:56.589668  315313 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:10:56.589714  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:56.610051  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	W1206 09:10:52.093124  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	W1206 09:10:54.587033  302585 pod_ready.go:104] pod "coredns-66bc5c9577-x87kt" is not "Ready", error: <nil>
	I1206 09:10:55.585975  302585 pod_ready.go:94] pod "coredns-66bc5c9577-x87kt" is "Ready"
	I1206 09:10:55.586008  302585 pod_ready.go:86] duration metric: took 38.505136087s for pod "coredns-66bc5c9577-x87kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.588734  302585 pod_ready.go:83] waiting for pod "etcd-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.593041  302585 pod_ready.go:94] pod "etcd-embed-certs-931091" is "Ready"
	I1206 09:10:55.593063  302585 pod_ready.go:86] duration metric: took 4.302801ms for pod "etcd-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.595093  302585 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.598822  302585 pod_ready.go:94] pod "kube-apiserver-embed-certs-931091" is "Ready"
	I1206 09:10:55.598845  302585 pod_ready.go:86] duration metric: took 3.728057ms for pod "kube-apiserver-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.601129  302585 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.784497  302585 pod_ready.go:94] pod "kube-controller-manager-embed-certs-931091" is "Ready"
	I1206 09:10:55.784528  302585 pod_ready.go:86] duration metric: took 183.382182ms for pod "kube-controller-manager-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:55.985153  302585 pod_ready.go:83] waiting for pod "kube-proxy-9hp5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:56.384742  302585 pod_ready.go:94] pod "kube-proxy-9hp5d" is "Ready"
	I1206 09:10:56.384766  302585 pod_ready.go:86] duration metric: took 399.589861ms for pod "kube-proxy-9hp5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:56.584419  302585 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:56.984660  302585 pod_ready.go:94] pod "kube-scheduler-embed-certs-931091" is "Ready"
	I1206 09:10:56.984687  302585 pod_ready.go:86] duration metric: took 400.242736ms for pod "kube-scheduler-embed-certs-931091" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:10:56.984703  302585 pod_ready.go:40] duration metric: took 39.907860837s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:10:57.035048  302585 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:10:57.037511  302585 out.go:179] * Done! kubectl is now configured to use "embed-certs-931091" cluster and "default" namespace by default
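
The pod_ready waits above check each kube-system control-plane label in turn before declaring the cluster ready. The same readiness gate can be approximated with `kubectl wait` against the freshly configured embed-certs-931091 context; a sketch, with the label selectors taken from the log.

// waitpods.go: wait for the kube-system control-plane pods checked above to
// report Ready. Sketch only; the context name and selectors come from the log.
package main

import (
	"log"
	"os/exec"
)

func main() {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		cmd := exec.Command("kubectl", "wait",
			"--context", "embed-certs-931091",
			"--namespace", "kube-system",
			"--for=condition=Ready", "pod",
			"--selector", sel,
			"--timeout=120s")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("pods with %s not Ready: %v\n%s", sel, err, out)
		}
	}
	log.Println("all kube-system control-plane pods are Ready")
}
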
	I1206 09:10:56.703702  315313 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:10:56.707285  315313 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:10:56.707321  315313 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:10:56.707330  315313 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:10:56.707377  315313 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:10:56.707452  315313 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:10:56.707534  315313 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:10:56.715119  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:10:56.733640  315313 start.go:296] duration metric: took 144.043086ms for postStartSetup
	I1206 09:10:56.733732  315313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:10:56.733785  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:56.752147  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:56.845082  315313 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:10:56.850022  315313 fix.go:56] duration metric: took 4.970326552s for fixHost
	I1206 09:10:56.850051  315313 start.go:83] releasing machines lock for "default-k8s-diff-port-213278", held for 4.970405589s
	I1206 09:10:56.850128  315313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-213278
	I1206 09:10:56.870603  315313 ssh_runner.go:195] Run: cat /version.json
	I1206 09:10:56.870656  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:56.870691  315313 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:10:56.870775  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:56.889848  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:56.890168  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:57.045224  315313 ssh_runner.go:195] Run: systemctl --version
	I1206 09:10:57.052155  315313 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:10:57.093508  315313 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:10:57.099046  315313 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:10:57.099122  315313 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:10:57.108766  315313 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
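
The find/mv step above disables any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled so they cannot conflict with the CNI minikube installs. A sketch of the same idea (not minikube's code).

// disablebridge.go: rename bridge/podman CNI configs in /etc/cni/net.d to
// *.mk_disabled, as the `find ... -exec mv` command above does.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Printf("disabled %s", m)
		}
	}
}
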
	I1206 09:10:57.108790  315313 start.go:496] detecting cgroup driver to use...
	I1206 09:10:57.108834  315313 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:10:57.108897  315313 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:10:57.124885  315313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:10:57.138708  315313 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:10:57.138763  315313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:10:57.156947  315313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:10:57.171079  315313 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:10:57.259168  315313 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:10:57.358065  315313 docker.go:234] disabling docker service ...
	I1206 09:10:57.358143  315313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:10:57.374164  315313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:10:57.387046  315313 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:10:57.476213  315313 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:10:57.564815  315313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:10:57.577172  315313 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:10:57.592109  315313 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:10:57.592178  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.601330  315313 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:10:57.601382  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.610246  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.618884  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.627831  315313 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:10:57.636223  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.645891  315313 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.654733  315313 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:10:57.663666  315313 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:10:57.671204  315313 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:10:57.678491  315313 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:57.762929  315313 ssh_runner.go:195] Run: sudo systemctl restart crio
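
The run of sed commands above points CRI-O at the registry.k8s.io/pause:3.10.1 pause image and the systemd cgroup manager before the restart. A sketch of those two substitutions as they would apply to the contents of /etc/crio/crio.conf.d/02-crio.conf (not minikube's own code; the sample input below is illustrative).

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies the pause_image and cgroup_manager rewrites logged above.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(patchCrioConf(in))
}
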
	I1206 09:10:57.910653  315313 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:10:57.910735  315313 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:10:57.914942  315313 start.go:564] Will wait 60s for crictl version
	I1206 09:10:57.915010  315313 ssh_runner.go:195] Run: which crictl
	I1206 09:10:57.918754  315313 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:10:57.944833  315313 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:10:57.944913  315313 ssh_runner.go:195] Run: crio --version
	I1206 09:10:57.974512  315313 ssh_runner.go:195] Run: crio --version
	I1206 09:10:58.014412  315313 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:10:58.020583  315313 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-213278 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:10:58.041851  315313 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:10:58.046136  315313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:10:58.056513  315313 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:10:58.056605  315313 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:10:58.056641  315313 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:10:58.088905  315313 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:10:58.088926  315313 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:10:58.088967  315313 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:10:58.114515  315313 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:10:58.114537  315313 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:10:58.114544  315313 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1206 09:10:58.114623  315313 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-213278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:10:58.114689  315313 ssh_runner.go:195] Run: crio config
	I1206 09:10:58.161253  315313 cni.go:84] Creating CNI manager for ""
	I1206 09:10:58.161277  315313 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 09:10:58.161295  315313 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:10:58.161321  315313 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-213278 NodeName:default-k8s-diff-port-213278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:10:58.161474  315313 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-213278"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:10:58.161552  315313 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:10:58.169872  315313 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:10:58.169926  315313 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:10:58.177525  315313 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 09:10:58.189809  315313 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:10:58.202467  315313 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
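
The kubeadm config printed above is what just landed on the node as /var/tmp/minikube/kubeadm.yaml.new. As a sanity check, the KubeletConfiguration document can be decoded and its cgroupDriver compared with the systemd driver detected earlier; a sketch using gopkg.in/yaml.v3 (an assumption about tooling, not something minikube itself does in this form).

// checkdriver.go: read the staged kubeadm config and print the kubelet
// cgroupDriver from its KubeletConfiguration document. Sketch only.
package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once every document has been read
		}
		if doc.Kind == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc.CgroupDriver)
		}
	}
}
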
	I1206 09:10:58.214580  315313 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:10:58.218300  315313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:10:58.228621  315313 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:58.327389  315313 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:10:58.352390  315313 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278 for IP: 192.168.85.2
	I1206 09:10:58.352408  315313 certs.go:195] generating shared ca certs ...
	I1206 09:10:58.352424  315313 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:58.352587  315313 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:10:58.352644  315313 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:10:58.352657  315313 certs.go:257] generating profile certs ...
	I1206 09:10:58.352781  315313 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/client.key
	I1206 09:10:58.352854  315313 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key.817b52b0
	I1206 09:10:58.352909  315313 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.key
	I1206 09:10:58.353153  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:10:58.353210  315313 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:10:58.353233  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:10:58.353271  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:10:58.353303  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:10:58.353341  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:10:58.353404  315313 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:10:58.354232  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:10:58.373433  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:10:58.392248  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:10:58.413630  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:10:58.436363  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1206 09:10:58.456681  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:10:58.473954  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:10:58.493330  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/default-k8s-diff-port-213278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:10:58.511578  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:10:58.528902  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:10:58.546213  315313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:10:58.564434  315313 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:10:58.576669  315313 ssh_runner.go:195] Run: openssl version
	I1206 09:10:58.582846  315313 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:10:58.590389  315313 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:10:58.598299  315313 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:10:58.601860  315313 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:10:58.601922  315313 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:10:58.636617  315313 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:10:58.645679  315313 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:58.654050  315313 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:10:58.661724  315313 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:58.665505  315313 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:58.665556  315313 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:10:58.700574  315313 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:10:58.708268  315313 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:10:58.715643  315313 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:10:58.722968  315313 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:10:58.726852  315313 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:10:58.726895  315313 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:10:58.763854  315313 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:10:58.771869  315313 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:10:58.775629  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:10:58.810606  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:10:58.846292  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:10:58.895382  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:10:58.946630  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:10:59.008735  315313 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
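
The six openssl invocations above are `-checkend 86400` freshness checks on the existing control-plane certificates. The equivalent check in Go, for one of the listed paths, as a sketch.

// checkend.go: report whether a certificate expires within the next 24 hours,
// mirroring `openssl x509 -checkend 86400` used in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM certificate found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}
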
	I1206 09:10:59.063204  315313 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-213278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-213278 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:59.063322  315313 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:10:59.063380  315313 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:10:59.092025  315313 cri.go:89] found id: "993bd9094e3710e4afa57b11133e4f8ed540f0bcf8e89c0258b11e42c9e374bc"
	I1206 09:10:59.092048  315313 cri.go:89] found id: "a151df72711445119fa366f7061cd8c8a8baa812129f92483b799ac38a9b7756"
	I1206 09:10:59.092053  315313 cri.go:89] found id: "8fe294be7962045740259ca379b55feefc319a86bae64f83cf89415bcf9eaea7"
	I1206 09:10:59.092059  315313 cri.go:89] found id: "877ac8d6fa140608aa94c4548bea183ea231d43b34b8e3afdb342cff6d7b7d13"
	I1206 09:10:59.092063  315313 cri.go:89] found id: ""
	I1206 09:10:59.092110  315313 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 09:10:59.108621  315313 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:10:59Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:10:59.108692  315313 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:10:59.119946  315313 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:10:59.119966  315313 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:10:59.120026  315313 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:10:59.129637  315313 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:10:59.130773  315313 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-213278" does not appear in /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:59.131571  315313 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-5617/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-213278" cluster setting kubeconfig missing "default-k8s-diff-port-213278" context setting]
	I1206 09:10:59.132698  315313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
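
The kubeconfig repair above is triggered because the profile's cluster and context entries are missing from /home/jenkins/minikube-integration/22049-5617/kubeconfig. A sketch of that presence check using k8s.io/client-go/tools/clientcmd (not the exact code path minikube uses).

// kubeconfigcheck.go: verify that a named cluster and context exist in a
// kubeconfig, the condition behind the "needs updating (will repair)" message.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22049-5617/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	name := "default-k8s-diff-port-213278"
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	fmt.Printf("cluster %q present: %v, context present: %v\n", name, hasCluster, hasContext)
}
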
	I1206 09:10:59.134887  315313 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:10:59.144055  315313 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 09:10:59.144095  315313 kubeadm.go:602] duration metric: took 24.121886ms to restartPrimaryControlPlane
	I1206 09:10:59.144107  315313 kubeadm.go:403] duration metric: took 80.913986ms to StartCluster
	I1206 09:10:59.144132  315313 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:59.144206  315313 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:59.145927  315313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:59.146237  315313 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:10:59.146367  315313 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:10:59.146463  315313 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-213278"
	I1206 09:10:59.146475  315313 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:59.146480  315313 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-213278"
	W1206 09:10:59.146489  315313 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:10:59.146518  315313 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:10:59.146517  315313 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-213278"
	I1206 09:10:59.146533  315313 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-213278"
	W1206 09:10:59.146541  315313 addons.go:248] addon dashboard should already be in state true
	I1206 09:10:59.146557  315313 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:10:59.146898  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:59.146975  315313 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-213278"
	I1206 09:10:59.147035  315313 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-213278"
	I1206 09:10:59.147005  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:59.147308  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:59.148435  315313 out.go:179] * Verifying Kubernetes components...
	I1206 09:10:59.150070  315313 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:59.174374  315313 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 09:10:59.174481  315313 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:10:59.175805  315313 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:59.175873  315313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:10:59.175850  315313 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 09:10:59.175946  315313 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-213278"
	W1206 09:10:59.175959  315313 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:10:59.175966  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:55.937145  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:56.436636  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:56.936559  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:57.437294  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:57.936374  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:58.437220  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:58.937188  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:59.437228  312610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:10:59.536872  312610 kubeadm.go:1114] duration metric: took 4.1909945s to wait for elevateKubeSystemPrivileges
	I1206 09:10:59.536909  312610 kubeadm.go:403] duration metric: took 14.722983517s to StartCluster
	I1206 09:10:59.536931  312610 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:59.537014  312610 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:10:59.539075  312610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:59.539396  312610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:10:59.539404  312610 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:10:59.539554  312610 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:10:59.539650  312610 addons.go:70] Setting storage-provisioner=true in profile "calico-646473"
	I1206 09:10:59.539673  312610 addons.go:239] Setting addon storage-provisioner=true in "calico-646473"
	I1206 09:10:59.539703  312610 host.go:66] Checking if "calico-646473" exists ...
	I1206 09:10:59.539730  312610 config.go:182] Loaded profile config "calico-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:10:59.539777  312610 addons.go:70] Setting default-storageclass=true in profile "calico-646473"
	I1206 09:10:59.539797  312610 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-646473"
	I1206 09:10:59.540222  312610 cli_runner.go:164] Run: docker container inspect calico-646473 --format={{.State.Status}}
	I1206 09:10:59.540289  312610 cli_runner.go:164] Run: docker container inspect calico-646473 --format={{.State.Status}}
	I1206 09:10:59.545328  312610 out.go:179] * Verifying Kubernetes components...
	I1206 09:10:59.547569  312610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:10:59.578361  312610 addons.go:239] Setting addon default-storageclass=true in "calico-646473"
	I1206 09:10:59.578613  312610 host.go:66] Checking if "calico-646473" exists ...
	I1206 09:10:59.580350  312610 cli_runner.go:164] Run: docker container inspect calico-646473 --format={{.State.Status}}
	I1206 09:10:59.585878  312610 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:10:59.586928  312610 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:59.587013  312610 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:10:59.587136  312610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-646473
	I1206 09:10:59.617212  312610 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:59.618194  312610 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:10:59.618395  312610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-646473
	I1206 09:10:59.627866  312610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/calico-646473/id_rsa Username:docker}
	I1206 09:10:59.658133  312610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/calico-646473/id_rsa Username:docker}
	I1206 09:10:59.704679  312610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:10:59.756248  312610 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:10:59.760450  312610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:59.801800  312610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:59.974070  312610 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1206 09:10:59.977086  312610 node_ready.go:35] waiting up to 15m0s for node "calico-646473" to be "Ready" ...
	I1206 09:11:00.181522  312610 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:11:00.182670  312610 addons.go:530] duration metric: took 643.115598ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:11:00.480324  312610 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-646473" context rescaled to 1 replicas
	I1206 09:10:59.175983  315313 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:10:59.176498  315313 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:10:59.177118  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 09:10:59.177136  315313 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 09:10:59.177182  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:59.209949  315313 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:59.209979  315313 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:10:59.210072  315313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:10:59.210396  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:59.216317  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:59.245377  315313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:10:59.315436  315313 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:10:59.328968  315313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:10:59.330363  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 09:10:59.330384  315313 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 09:10:59.332748  315313 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-213278" to be "Ready" ...
	I1206 09:10:59.350823  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 09:10:59.350854  315313 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 09:10:59.356591  315313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:10:59.370336  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 09:10:59.370361  315313 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 09:10:59.396855  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 09:10:59.396879  315313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 09:10:59.416505  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 09:10:59.416571  315313 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 09:10:59.435080  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 09:10:59.435112  315313 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 09:10:59.455309  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 09:10:59.455349  315313 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 09:10:59.481751  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 09:10:59.481782  315313 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 09:10:59.504517  315313 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:10:59.504545  315313 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 09:10:59.521548  315313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 09:11:00.768196  315313 node_ready.go:49] node "default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:00.768229  315313 node_ready.go:38] duration metric: took 1.435456237s for node "default-k8s-diff-port-213278" to be "Ready" ...
	I1206 09:11:00.768265  315313 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:11:00.768351  315313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:11:01.472126  315313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.143118433s)
	I1206 09:11:01.472244  315313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.115374804s)
	I1206 09:11:01.472282  315313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.950690285s)
	I1206 09:11:01.472509  315313 api_server.go:72] duration metric: took 2.326237153s to wait for apiserver process to appear ...
	I1206 09:11:01.472520  315313 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:11:01.472538  315313 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1206 09:11:01.473954  315313 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-213278 addons enable metrics-server
	
	I1206 09:11:01.479362  315313 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:11:01.479393  315313 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:11:01.480945  315313 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1206 09:11:01.482133  315313 addons.go:530] duration metric: took 2.335774958s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1206 09:11:01.981084  312610 node_ready.go:57] node "calico-646473" has "Ready":"False" status (will retry)
	I1206 09:11:03.980268  312610 node_ready.go:49] node "calico-646473" is "Ready"
	I1206 09:11:03.980298  312610 node_ready.go:38] duration metric: took 4.003166665s for node "calico-646473" to be "Ready" ...
	I1206 09:11:03.980324  312610 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:11:03.980377  312610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:11:03.995162  312610 api_server.go:72] duration metric: took 4.455726706s to wait for apiserver process to appear ...
	I1206 09:11:03.995192  312610 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:11:03.995213  312610 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1206 09:11:04.000224  312610 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1206 09:11:04.001467  312610 api_server.go:141] control plane version: v1.34.2
	I1206 09:11:04.001496  312610 api_server.go:131] duration metric: took 6.297072ms to wait for apiserver health ...
	I1206 09:11:04.001507  312610 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:11:04.006004  312610 system_pods.go:59] 9 kube-system pods found
	I1206 09:11:04.006063  312610 system_pods.go:61] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:04.006076  312610 system_pods.go:61] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:04.006089  312610 system_pods.go:61] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:04.006095  312610 system_pods.go:61] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:04.006101  312610 system_pods.go:61] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:04.006112  312610 system_pods.go:61] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:04.006117  312610 system_pods.go:61] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:04.006131  312610 system_pods.go:61] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:04.006139  312610 system_pods.go:61] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:04.006146  312610 system_pods.go:74] duration metric: took 4.632445ms to wait for pod list to return data ...
	I1206 09:11:04.006156  312610 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:11:04.009146  312610 default_sa.go:45] found service account: "default"
	I1206 09:11:04.009175  312610 default_sa.go:55] duration metric: took 3.0087ms for default service account to be created ...
	I1206 09:11:04.009186  312610 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:11:04.012737  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:04.012765  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:04.012773  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:04.012780  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:04.012784  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:04.012788  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:04.012793  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:04.012796  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:04.012800  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:04.012805  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:04.012834  312610 retry.go:31] will retry after 286.404559ms: missing components: kube-dns
	I1206 09:11:04.305703  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:04.305736  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:04.305744  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:04.305753  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:04.305814  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:04.305828  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:04.305845  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:04.305858  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:04.305870  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:04.305891  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:04.305913  312610 retry.go:31] will retry after 341.917872ms: missing components: kube-dns
	I1206 09:11:04.653375  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:04.653408  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:04.653419  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:04.653482  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:04.653560  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:04.653573  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:04.653581  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:04.653591  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:04.653599  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:04.653621  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:04.653645  312610 retry.go:31] will retry after 441.833935ms: missing components: kube-dns
	I1206 09:11:05.101281  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:05.101328  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:05.101340  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:05.101430  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:05.101442  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:05.101450  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:05.101481  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:05.101496  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:05.101504  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:05.101509  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:05.101526  312610 retry.go:31] will retry after 485.497195ms: missing components: kube-dns
	I1206 09:11:05.592676  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:05.592724  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:05.592740  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:05.592750  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:05.592762  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:05.592769  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:05.592777  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:05.592786  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:05.592793  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:05.592801  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:05.592819  312610 retry.go:31] will retry after 566.418639ms: missing components: kube-dns
	I1206 09:11:01.972809  315313 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1206 09:11:01.978685  315313 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:11:01.978715  315313 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:11:02.473186  315313 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1206 09:11:02.478828  315313 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1206 09:11:02.480458  315313 api_server.go:141] control plane version: v1.34.2
	I1206 09:11:02.480485  315313 api_server.go:131] duration metric: took 1.00795904s to wait for apiserver health ...
	I1206 09:11:02.480496  315313 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:11:02.484229  315313 system_pods.go:59] 8 kube-system pods found
	I1206 09:11:02.484280  315313 system_pods.go:61] "coredns-66bc5c9577-54hvq" [f156a081-19f1-4a04-8234-24500867cf67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:02.484296  315313 system_pods.go:61] "etcd-default-k8s-diff-port-213278" [ba81dffa-f8a1-43a6-bda3-de5197a2764e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:11:02.484312  315313 system_pods.go:61] "kindnet-4jw2t" [1e817daf-c694-4ddf-8e08-85f504421f9b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 09:11:02.484321  315313 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-213278" [00ae632e-2d8d-48fc-a219-a8411d843ff5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:11:02.484335  315313 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-213278" [990ac86b-e97e-4874-94f3-88bc015c02bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:02.484347  315313 system_pods.go:61] "kube-proxy-86f62" [6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:11:02.484360  315313 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-213278" [c5fc3b4d-b7fb-42ba-b275-28d2c56e9b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:02.484368  315313 system_pods.go:61] "storage-provisioner" [4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:02.484377  315313 system_pods.go:74] duration metric: took 3.872776ms to wait for pod list to return data ...
	I1206 09:11:02.484390  315313 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:11:02.486893  315313 default_sa.go:45] found service account: "default"
	I1206 09:11:02.486916  315313 default_sa.go:55] duration metric: took 2.520161ms for default service account to be created ...
	I1206 09:11:02.486927  315313 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:11:02.489926  315313 system_pods.go:86] 8 kube-system pods found
	I1206 09:11:02.489958  315313 system_pods.go:89] "coredns-66bc5c9577-54hvq" [f156a081-19f1-4a04-8234-24500867cf67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:02.489971  315313 system_pods.go:89] "etcd-default-k8s-diff-port-213278" [ba81dffa-f8a1-43a6-bda3-de5197a2764e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:11:02.489979  315313 system_pods.go:89] "kindnet-4jw2t" [1e817daf-c694-4ddf-8e08-85f504421f9b] Running
	I1206 09:11:02.490019  315313 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-213278" [00ae632e-2d8d-48fc-a219-a8411d843ff5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:11:02.490032  315313 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-213278" [990ac86b-e97e-4874-94f3-88bc015c02bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:11:02.490041  315313 system_pods.go:89] "kube-proxy-86f62" [6d4cf5c2-5d6c-4c7d-b49f-1848af4f67cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:11:02.490052  315313 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-213278" [c5fc3b4d-b7fb-42ba-b275-28d2c56e9b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:11:02.490064  315313 system_pods.go:89] "storage-provisioner" [4e805b49-2e11-40c0-9ce9-eb5eed3e0c3b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:11:02.490077  315313 system_pods.go:126] duration metric: took 3.142256ms to wait for k8s-apps to be running ...
	I1206 09:11:02.490088  315313 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:11:02.490139  315313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:02.507849  315313 system_svc.go:56] duration metric: took 17.750336ms WaitForService to wait for kubelet
	I1206 09:11:02.507877  315313 kubeadm.go:587] duration metric: took 3.361605718s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:11:02.507900  315313 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:11:02.510842  315313 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:11:02.510873  315313 node_conditions.go:123] node cpu capacity is 8
	I1206 09:11:02.510892  315313 node_conditions.go:105] duration metric: took 2.985295ms to run NodePressure ...
	I1206 09:11:02.510906  315313 start.go:242] waiting for startup goroutines ...
	I1206 09:11:02.510929  315313 start.go:247] waiting for cluster config update ...
	I1206 09:11:02.510943  315313 start.go:256] writing updated cluster config ...
	I1206 09:11:02.511286  315313 ssh_runner.go:195] Run: rm -f paused
	I1206 09:11:02.515867  315313 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:11:02.519770  315313 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-54hvq" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:11:04.526748  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	W1206 09:11:06.527128  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	I1206 09:11:06.164199  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:06.164241  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:06.164257  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:06.164265  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:06.164273  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:06.164291  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:06.164297  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:06.164304  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:06.164310  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:06.164317  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:06.164334  312610 retry.go:31] will retry after 787.981849ms: missing components: kube-dns
	I1206 09:11:06.960250  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:06.960289  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:06.960302  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:06.960311  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:06.960317  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:06.960324  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:06.960330  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:06.960337  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:06.960342  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:06.960347  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:06.960365  312610 retry.go:31] will retry after 1.055542155s: missing components: kube-dns
	I1206 09:11:08.020370  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:08.020409  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:08.020423  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:08.020433  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:08.020439  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:08.020446  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:08.020450  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:08.020456  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:08.020463  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:08.020467  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:08.020483  312610 retry.go:31] will retry after 1.081769528s: missing components: kube-dns
	I1206 09:11:09.111772  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:09.111813  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:09.111825  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:09.111835  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:09.111843  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:09.111851  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:09.111857  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:09.111862  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:09.111867  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:09.111873  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:09.111891  312610 retry.go:31] will retry after 1.327495758s: missing components: kube-dns
	I1206 09:11:10.444781  312610 system_pods.go:86] 9 kube-system pods found
	I1206 09:11:10.444821  312610 system_pods.go:89] "calico-kube-controllers-5c676f698c-h7xs9" [9caf755a-85c0-49bd-942f-b9625608be85] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 09:11:10.444852  312610 system_pods.go:89] "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 09:11:10.444865  312610 system_pods.go:89] "coredns-66bc5c9577-fkh66" [cece5500-3145-4ac0-a2eb-c1d66d064017] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:10.444876  312610 system_pods.go:89] "etcd-calico-646473" [4535314e-76ee-4e11-8bfe-324dde47def1] Running
	I1206 09:11:10.444885  312610 system_pods.go:89] "kube-apiserver-calico-646473" [7d8f339b-a4fd-42d3-847b-a9469a85585a] Running
	I1206 09:11:10.444894  312610 system_pods.go:89] "kube-controller-manager-calico-646473" [a436b12b-b1e0-4707-87ca-4e64e754b0d0] Running
	I1206 09:11:10.444903  312610 system_pods.go:89] "kube-proxy-tjf8c" [740cbf1a-8736-436f-8bbb-89ccad5359c8] Running
	I1206 09:11:10.444912  312610 system_pods.go:89] "kube-scheduler-calico-646473" [3b515686-bf11-4f9e-8e0f-125749836231] Running
	I1206 09:11:10.444918  312610 system_pods.go:89] "storage-provisioner" [66d7e86e-581e-4c8d-9d38-31112e0eb52f] Running
	I1206 09:11:10.444938  312610 retry.go:31] will retry after 2.037774599s: missing components: kube-dns
	W1206 09:11:08.613529  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	W1206 09:11:11.029825  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.497178801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.49736663Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/56087d36d071ab91820385145ce3ae749ddfad6d74f93dc0f783f143d6ef5c14/merged/etc/passwd: no such file or directory"
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.49740937Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/56087d36d071ab91820385145ce3ae749ddfad6d74f93dc0f783f143d6ef5c14/merged/etc/group: no such file or directory"
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.497665622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.523824812Z" level=info msg="Created container 18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da: kube-system/storage-provisioner/storage-provisioner" id=e5c60e86-bbcc-4e74-952e-eb35d0536cd0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.524478844Z" level=info msg="Starting container: 18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da" id=980e4cf6-6a98-4089-b663-54c800867ca1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:10:47 embed-certs-931091 crio[567]: time="2025-12-06T09:10:47.526526045Z" level=info msg="Started container" PID=1721 containerID=18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da description=kube-system/storage-provisioner/storage-provisioner id=980e4cf6-6a98-4089-b663-54c800867ca1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c58bfbe5b25978dfec19c32b60558915afdac2dacc2667d1fa145764f00ba4e1
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.080344875Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.085705921Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.085741991Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.085762315Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.090451021Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.090554166Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.090582033Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.095105456Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.095133759Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.095158692Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.100180736Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.100213059Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.100233052Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.104442828Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.104468082Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.104491779Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.109265602Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 09:10:57 embed-certs-931091 crio[567]: time="2025-12-06T09:10:57.109294906Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	18df3e3592460       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   c58bfbe5b2597       storage-provisioner                          kube-system
	bb28e56c23678       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   65a2c3bbf703d       dashboard-metrics-scraper-6ffb444bf9-jhnrz   kubernetes-dashboard
	682529937bb65       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   5c13db0ef09c1       kubernetes-dashboard-855c9754f9-68gdp        kubernetes-dashboard
	e1cb1e6a344a1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   f7a062009d92d       coredns-66bc5c9577-x87kt                     kube-system
	4a4d1fca96529       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   04192ee4b5268       busybox                                      default
	b82c97edbdf4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   c58bfbe5b2597       storage-provisioner                          kube-system
	37108c9bddfdb       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           57 seconds ago       Running             kube-proxy                  0                   a0d1a3a287672       kube-proxy-9hp5d                             kube-system
	edd89974c1046       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   53a19c0f92bec       kindnet-kzpz2                                kube-system
	a846117bc72b7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           About a minute ago   Running             kube-scheduler              0                   db2ab96611679       kube-scheduler-embed-certs-931091            kube-system
	04174e56b26bf       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           About a minute ago   Running             kube-apiserver              0                   d664999ead8e2       kube-apiserver-embed-certs-931091            kube-system
	9a3dc4e5add4a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   869d241ba4325       etcd-embed-certs-931091                      kube-system
	893b7522c648e       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           About a minute ago   Running             kube-controller-manager     0                   4f338e793f1c3       kube-controller-manager-embed-certs-931091   kube-system
	
	
	==> coredns [e1cb1e6a344a1ed0d926d7ad94b48af2f3c736de156bc566f54758c87a09ee4e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49021 - 63698 "HINFO IN 4375437974956359104.602222676637547894. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05027852s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-931091
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-931091
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=embed-certs-931091
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:09:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-931091
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:11:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:11:06 +0000   Sat, 06 Dec 2025 09:09:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:11:06 +0000   Sat, 06 Dec 2025 09:09:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:11:06 +0000   Sat, 06 Dec 2025 09:09:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:11:06 +0000   Sat, 06 Dec 2025 09:09:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-931091
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                ca3719f5-d0e6-4020-bdb6-8b9c5b73b4fa
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-x87kt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-931091                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-kzpz2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-931091             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-931091    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-9hp5d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-931091             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jhnrz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-68gdp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-931091 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-931091 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-931091 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-931091 event: Registered Node embed-certs-931091 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-931091 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node embed-certs-931091 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node embed-certs-931091 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node embed-certs-931091 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node embed-certs-931091 event: Registered Node embed-certs-931091 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [9a3dc4e5add4a40d23fc1d867a32c27494d2f0aa5fe72049c03da86c84d3090b] <==
	{"level":"warn","ts":"2025-12-06T09:10:14.734311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.744150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.754601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.763447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.773566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.782382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.793047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.802309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.810082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.818709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.828874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.838294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.847232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.856154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.864554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.874540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.894447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.901798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.911090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.921106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.929025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.944123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.953643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:14.962016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:10:15.028273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55578","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:14 up 53 min,  0 user,  load average: 4.29, 3.21, 2.13
	Linux embed-certs-931091 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [edd89974c1046589be1d988771842ab006817d9cb74b7aa914e30d9c1988d400] <==
	I1206 09:10:16.873912       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:10:16.874212       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1206 09:10:16.874394       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:10:16.874412       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:10:16.874439       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:10:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:10:17.079114       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:10:17.079167       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:10:17.079189       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:10:17.079390       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 09:10:47.080385       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 09:10:47.080385       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 09:10:47.080418       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 09:10:47.080438       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1206 09:10:48.579865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:10:48.579896       1 metrics.go:72] Registering metrics
	I1206 09:10:48.580020       1 controller.go:711] "Syncing nftables rules"
	I1206 09:10:57.079967       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:10:57.080059       1 main.go:301] handling current node
	I1206 09:11:07.079674       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1206 09:11:07.079716       1 main.go:301] handling current node
	
	
	==> kube-apiserver [04174e56b26bf5e8534176ff57e230be3ed770891a615c3b75077b0468d06685] <==
	I1206 09:10:15.548737       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:10:15.548893       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1206 09:10:15.548927       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:10:15.549073       1 aggregator.go:171] initial CRD sync complete...
	I1206 09:10:15.549084       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:10:15.549088       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:10:15.549093       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:10:15.555354       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:10:15.555682       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:10:15.563585       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 09:10:15.563709       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:10:15.567147       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 09:10:15.567177       1 policy_source.go:240] refreshing policies
	I1206 09:10:15.595781       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:10:15.850786       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:10:15.879510       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:10:15.900716       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:10:15.907300       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:10:15.913832       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:10:15.962111       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.135.112"}
	I1206 09:10:15.973939       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.41.101"}
	I1206 09:10:16.453319       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:10:19.080358       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:10:19.377832       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:10:19.527767       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [893b7522c648e625ee7cedf9142d4b1472b197d9b456f9a6939ff5eafca0b904] <==
	I1206 09:10:18.924669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 09:10:18.924689       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:10:18.924711       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:10:18.924720       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:10:18.924746       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:10:18.924755       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:10:18.924768       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:10:18.924878       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:10:18.925137       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:10:18.925233       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:10:18.926327       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:10:18.926355       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:10:18.930798       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:10:18.930805       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 09:10:18.932010       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:10:18.936148       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:10:18.939454       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:10:18.941719       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:10:18.942898       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:10:18.945226       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:10:18.950606       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:10:18.950620       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:10:18.950629       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:10:18.950657       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:10:18.954019       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [37108c9bddfdb2c8b274f5250ba39d648e2efb7d93b47c46241aea6a5696a5cf] <==
	I1206 09:10:16.771632       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:10:16.846822       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:10:16.947430       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:10:16.947483       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1206 09:10:16.947605       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:10:16.966776       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:10:16.966858       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:10:16.973570       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:10:16.974023       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:10:16.974066       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:10:16.975803       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:10:16.976961       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:10:16.976408       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:10:16.977049       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:10:16.976949       1 config.go:200] "Starting service config controller"
	I1206 09:10:16.977062       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:10:16.977576       1 config.go:309] "Starting node config controller"
	I1206 09:10:16.977597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:10:16.977604       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:10:17.077216       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:10:17.077232       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:10:17.077250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a846117bc72b733c4769ce32f31c23b76ae89d79e9fd9cf10be97e49bc2b4a74] <==
	I1206 09:10:14.840155       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:10:15.470970       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:10:15.471130       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:10:15.471155       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:10:15.471166       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:10:15.512294       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:10:15.512713       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:10:15.518034       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:10:15.518080       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:10:15.519493       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:10:15.519725       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:10:15.619255       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:10:19 embed-certs-931091 kubelet[732]: I1206 09:10:19.529232     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r544n\" (UniqueName: \"kubernetes.io/projected/54fbcd33-4737-4881-ab3e-5359f143b463-kube-api-access-r544n\") pod \"dashboard-metrics-scraper-6ffb444bf9-jhnrz\" (UID: \"54fbcd33-4737-4881-ab3e-5359f143b463\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz"
	Dec 06 09:10:23 embed-certs-931091 kubelet[732]: I1206 09:10:23.418680     732 scope.go:117] "RemoveContainer" containerID="946458d74e645fc5b3f9560ba0e099e512a25bc1754b686164fcf3f981740746"
	Dec 06 09:10:24 embed-certs-931091 kubelet[732]: I1206 09:10:24.423423     732 scope.go:117] "RemoveContainer" containerID="946458d74e645fc5b3f9560ba0e099e512a25bc1754b686164fcf3f981740746"
	Dec 06 09:10:24 embed-certs-931091 kubelet[732]: I1206 09:10:24.423614     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:24 embed-certs-931091 kubelet[732]: E1206 09:10:24.423825     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:10:25 embed-certs-931091 kubelet[732]: I1206 09:10:25.133868     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 06 09:10:25 embed-certs-931091 kubelet[732]: I1206 09:10:25.427640     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:25 embed-certs-931091 kubelet[732]: E1206 09:10:25.427844     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:10:26 embed-certs-931091 kubelet[732]: I1206 09:10:26.441807     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-68gdp" podStartSLOduration=1.138686453 podStartE2EDuration="7.441786727s" podCreationTimestamp="2025-12-06 09:10:19 +0000 UTC" firstStartedPulling="2025-12-06 09:10:19.777100277 +0000 UTC m=+6.509616277" lastFinishedPulling="2025-12-06 09:10:26.080200542 +0000 UTC m=+12.812716551" observedRunningTime="2025-12-06 09:10:26.44173749 +0000 UTC m=+13.174253505" watchObservedRunningTime="2025-12-06 09:10:26.441786727 +0000 UTC m=+13.174302739"
	Dec 06 09:10:31 embed-certs-931091 kubelet[732]: I1206 09:10:31.148805     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:31 embed-certs-931091 kubelet[732]: E1206 09:10:31.149020     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:10:43 embed-certs-931091 kubelet[732]: I1206 09:10:43.359117     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:43 embed-certs-931091 kubelet[732]: I1206 09:10:43.475591     732 scope.go:117] "RemoveContainer" containerID="577a063f5059497b39cf3b34aa0494e9b5931d996ef8501127039ac40851d1f0"
	Dec 06 09:10:43 embed-certs-931091 kubelet[732]: I1206 09:10:43.475868     732 scope.go:117] "RemoveContainer" containerID="bb28e56c236788efe1a069138e54e21d5360b795aab10ade3cdc683c428bde46"
	Dec 06 09:10:43 embed-certs-931091 kubelet[732]: E1206 09:10:43.476156     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:10:47 embed-certs-931091 kubelet[732]: I1206 09:10:47.489324     732 scope.go:117] "RemoveContainer" containerID="b82c97edbdf4fe03412ca2f96bcd004be9526498d0c3112aa46de52d9a2f0c3c"
	Dec 06 09:10:51 embed-certs-931091 kubelet[732]: I1206 09:10:51.148287     732 scope.go:117] "RemoveContainer" containerID="bb28e56c236788efe1a069138e54e21d5360b795aab10ade3cdc683c428bde46"
	Dec 06 09:10:51 embed-certs-931091 kubelet[732]: E1206 09:10:51.148542     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:11:02 embed-certs-931091 kubelet[732]: I1206 09:11:02.358557     732 scope.go:117] "RemoveContainer" containerID="bb28e56c236788efe1a069138e54e21d5360b795aab10ade3cdc683c428bde46"
	Dec 06 09:11:02 embed-certs-931091 kubelet[732]: E1206 09:11:02.358810     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jhnrz_kubernetes-dashboard(54fbcd33-4737-4881-ab3e-5359f143b463)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jhnrz" podUID="54fbcd33-4737-4881-ab3e-5359f143b463"
	Dec 06 09:11:09 embed-certs-931091 kubelet[732]: I1206 09:11:09.496195     732 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 06 09:11:09 embed-certs-931091 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:11:09 embed-certs-931091 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:11:09 embed-certs-931091 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:11:09 embed-certs-931091 systemd[1]: kubelet.service: Consumed 1.814s CPU time.
	
	
	==> kubernetes-dashboard [682529937bb653f6ae7d2415238d63ec894db888c269bbed09b7929099eb766b] <==
	2025/12/06 09:10:26 Starting overwatch
	2025/12/06 09:10:26 Using namespace: kubernetes-dashboard
	2025/12/06 09:10:26 Using in-cluster config to connect to apiserver
	2025/12/06 09:10:26 Using secret token for csrf signing
	2025/12/06 09:10:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:10:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:10:26 Successful initial request to the apiserver, version: v1.34.2
	2025/12/06 09:10:26 Generating JWE encryption key
	2025/12/06 09:10:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:10:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:10:26 Initializing JWE encryption key from synchronized object
	2025/12/06 09:10:26 Creating in-cluster Sidecar client
	2025/12/06 09:10:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:10:26 Serving insecurely on HTTP port: 9090
	2025/12/06 09:10:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [18df3e3592460610b22c280a5267da94f905448d85da2d4a7e6f4641145b95da] <==
	I1206 09:10:47.546770       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:10:47.546828       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:10:47.549023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:51.004607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:55.265956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:58.865199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:01.919275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:04.941692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:04.946627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:11:04.946792       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:11:04.946934       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-931091_643d66e7-b329-4d97-b79b-b53f61108e9e!
	I1206 09:11:04.946934       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45ccfad5-6c96-43b7-8f37-e4ba5bb38e67", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-931091_643d66e7-b329-4d97-b79b-b53f61108e9e became leader
	W1206 09:11:04.949469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:04.952821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:11:05.047954       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-931091_643d66e7-b329-4d97-b79b-b53f61108e9e!
	W1206 09:11:06.956113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:06.964115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:08.968809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:09.017152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:11.022770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:11.029265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:13.034327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:13.045908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:15.049872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:15.054305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b82c97edbdf4fe03412ca2f96bcd004be9526498d0c3112aa46de52d9a2f0c3c] <==
	I1206 09:10:16.740072       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:10:46.743688       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-931091 -n embed-certs-931091
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-931091 -n embed-certs-931091: exit status 2 (353.058701ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-931091 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.74s)
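
For reference, the node-side checks that the pause flow runs (and that the next failure below shows erroring on `sudo runc list -f json` with "open /run/runc: no such file or directory") can be replayed by hand against a profile. Below is a minimal Go sketch, assuming the `out/minikube-linux-amd64` binary path and the profile names that appear in these logs; the three commands mirror the `ssh_runner` calls logged above, and the program itself is hypothetical and not part of minikube or its test suite:

	// pausecheck.go: hypothetical diagnostic sketch, not part of minikube or its tests.
	// Replays the node-side checks seen in the pause logs above over `minikube ssh`.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// nodeRun executes a single command inside the profile's node via `minikube ssh`.
	func nodeRun(profile, cmd string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile, "--", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		profile := "embed-certs-931091" // default to the profile from the logs above
		if len(os.Args) > 1 {
			profile = os.Args[1]
		}
		// Same three checks the pause path logs: kubelet state, CRI containers, runc state.
		for _, cmd := range []string{
			"sudo systemctl is-active kubelet",
			"sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system",
			"sudo runc list -f json",
		} {
			out, err := nodeRun(profile, cmd)
			fmt.Printf("$ %s (err=%v)\n%s\n", cmd, err, out)
		}
	}
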

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-213278 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-213278 --alsologtostderr -v=1: exit status 80 (1.964239178s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-213278 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:11:49.248591  333319 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:11:49.248698  333319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:49.248708  333319 out.go:374] Setting ErrFile to fd 2...
	I1206 09:11:49.248712  333319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:49.248898  333319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:11:49.249163  333319 out.go:368] Setting JSON to false
	I1206 09:11:49.249182  333319 mustload.go:66] Loading cluster: default-k8s-diff-port-213278
	I1206 09:11:49.249511  333319 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:49.249879  333319 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213278 --format={{.State.Status}}
	I1206 09:11:49.272227  333319 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:11:49.272488  333319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:11:49.361273  333319 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-06 09:11:49.340143449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:11:49.362497  333319 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-213278 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 09:11:49.364523  333319 out.go:179] * Pausing node default-k8s-diff-port-213278 ... 
	I1206 09:11:49.366205  333319 host.go:66] Checking if "default-k8s-diff-port-213278" exists ...
	I1206 09:11:49.366534  333319 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:49.366575  333319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278
	I1206 09:11:49.400209  333319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/default-k8s-diff-port-213278/id_rsa Username:docker}
	I1206 09:11:49.509897  333319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:49.528198  333319 pause.go:52] kubelet running: true
	I1206 09:11:49.528351  333319 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:11:49.751345  333319 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:11:49.751452  333319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:11:49.834246  333319 cri.go:89] found id: "05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8"
	I1206 09:11:49.834274  333319 cri.go:89] found id: "deb19868994976b1519511ddc4ae28885b0e5e36a5be9d305b98fc87796e836e"
	I1206 09:11:49.834280  333319 cri.go:89] found id: "90f6f1c662ee4be789481c3d36c939de768e2a68031835acafba34c8bd8c2c0a"
	I1206 09:11:49.834285  333319 cri.go:89] found id: "79f8e846255f85ab83dd33f39644030d86c3a149164871b704e48bf6ca0888b1"
	I1206 09:11:49.834290  333319 cri.go:89] found id: "cfa9b86f728e7ba4d6d1098b4b2284eb87b413da41766f3282ba776c9808cbcf"
	I1206 09:11:49.834296  333319 cri.go:89] found id: "993bd9094e3710e4afa57b11133e4f8ed540f0bcf8e89c0258b11e42c9e374bc"
	I1206 09:11:49.834300  333319 cri.go:89] found id: "a151df72711445119fa366f7061cd8c8a8baa812129f92483b799ac38a9b7756"
	I1206 09:11:49.834304  333319 cri.go:89] found id: "8fe294be7962045740259ca379b55feefc319a86bae64f83cf89415bcf9eaea7"
	I1206 09:11:49.834309  333319 cri.go:89] found id: "877ac8d6fa140608aa94c4548bea183ea231d43b34b8e3afdb342cff6d7b7d13"
	I1206 09:11:49.834319  333319 cri.go:89] found id: "1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c"
	I1206 09:11:49.834324  333319 cri.go:89] found id: "51194e071a8c47771f617e61bfe1e35cfe1b6d522ef2161e639970de26ba9592"
	I1206 09:11:49.834328  333319 cri.go:89] found id: ""
	I1206 09:11:49.834383  333319 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:11:49.850419  333319 retry.go:31] will retry after 251.102715ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:49Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:11:50.101776  333319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:50.116245  333319 pause.go:52] kubelet running: false
	I1206 09:11:50.116296  333319 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:11:50.301464  333319 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:11:50.301628  333319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:11:50.384055  333319 cri.go:89] found id: "05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8"
	I1206 09:11:50.384079  333319 cri.go:89] found id: "deb19868994976b1519511ddc4ae28885b0e5e36a5be9d305b98fc87796e836e"
	I1206 09:11:50.384086  333319 cri.go:89] found id: "90f6f1c662ee4be789481c3d36c939de768e2a68031835acafba34c8bd8c2c0a"
	I1206 09:11:50.384091  333319 cri.go:89] found id: "79f8e846255f85ab83dd33f39644030d86c3a149164871b704e48bf6ca0888b1"
	I1206 09:11:50.384096  333319 cri.go:89] found id: "cfa9b86f728e7ba4d6d1098b4b2284eb87b413da41766f3282ba776c9808cbcf"
	I1206 09:11:50.384101  333319 cri.go:89] found id: "993bd9094e3710e4afa57b11133e4f8ed540f0bcf8e89c0258b11e42c9e374bc"
	I1206 09:11:50.384106  333319 cri.go:89] found id: "a151df72711445119fa366f7061cd8c8a8baa812129f92483b799ac38a9b7756"
	I1206 09:11:50.384110  333319 cri.go:89] found id: "8fe294be7962045740259ca379b55feefc319a86bae64f83cf89415bcf9eaea7"
	I1206 09:11:50.384114  333319 cri.go:89] found id: "877ac8d6fa140608aa94c4548bea183ea231d43b34b8e3afdb342cff6d7b7d13"
	I1206 09:11:50.384122  333319 cri.go:89] found id: "1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c"
	I1206 09:11:50.384130  333319 cri.go:89] found id: "51194e071a8c47771f617e61bfe1e35cfe1b6d522ef2161e639970de26ba9592"
	I1206 09:11:50.384135  333319 cri.go:89] found id: ""
	I1206 09:11:50.384203  333319 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:11:50.398008  333319 retry.go:31] will retry after 429.331992ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:50Z" level=error msg="open /run/runc: no such file or directory"
	I1206 09:11:50.827641  333319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:50.847901  333319 pause.go:52] kubelet running: false
	I1206 09:11:50.847960  333319 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 09:11:51.034109  333319 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 09:11:51.034181  333319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 09:11:51.112092  333319 cri.go:89] found id: "05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8"
	I1206 09:11:51.112122  333319 cri.go:89] found id: "deb19868994976b1519511ddc4ae28885b0e5e36a5be9d305b98fc87796e836e"
	I1206 09:11:51.112129  333319 cri.go:89] found id: "90f6f1c662ee4be789481c3d36c939de768e2a68031835acafba34c8bd8c2c0a"
	I1206 09:11:51.112135  333319 cri.go:89] found id: "79f8e846255f85ab83dd33f39644030d86c3a149164871b704e48bf6ca0888b1"
	I1206 09:11:51.112149  333319 cri.go:89] found id: "cfa9b86f728e7ba4d6d1098b4b2284eb87b413da41766f3282ba776c9808cbcf"
	I1206 09:11:51.112155  333319 cri.go:89] found id: "993bd9094e3710e4afa57b11133e4f8ed540f0bcf8e89c0258b11e42c9e374bc"
	I1206 09:11:51.112160  333319 cri.go:89] found id: "a151df72711445119fa366f7061cd8c8a8baa812129f92483b799ac38a9b7756"
	I1206 09:11:51.112165  333319 cri.go:89] found id: "8fe294be7962045740259ca379b55feefc319a86bae64f83cf89415bcf9eaea7"
	I1206 09:11:51.112169  333319 cri.go:89] found id: "877ac8d6fa140608aa94c4548bea183ea231d43b34b8e3afdb342cff6d7b7d13"
	I1206 09:11:51.112183  333319 cri.go:89] found id: "1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c"
	I1206 09:11:51.112192  333319 cri.go:89] found id: "51194e071a8c47771f617e61bfe1e35cfe1b6d522ef2161e639970de26ba9592"
	I1206 09:11:51.112196  333319 cri.go:89] found id: ""
	I1206 09:11:51.112243  333319 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 09:11:51.129395  333319 out.go:203] 
	W1206 09:11:51.130785  333319 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 09:11:51.130807  333319 out.go:285] * 
	* 
	W1206 09:11:51.134840  333319 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:11:51.137748  333319 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-213278 --alsologtostderr -v=1 failed: exit status 80
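Editorial aside (not part of the captured output): the exit status 80 above is the GUEST_PAUSE failure shown in the stderr block, where every attempt to enumerate running containers with `sudo runc list -f json` fails with "open /run/runc: no such file or directory". A minimal Go sketch of that probe-and-retry pattern is shown below; it is an illustration only, not minikube's implementation, and it assumes direct access to a host with sudo and runc rather than minikube's SSH runner.

// probe_runc.go: minimal sketch of running `sudo runc list -f json` with a few
// retries, mirroring the retry messages visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		// The captured log shows this exact command failing with
		// "open /run/runc: no such file or directory".
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("runc list succeeded:\n%s\n", out)
			return
		}
		lastErr = fmt.Errorf("runc list: %v: %s", err, out)
		fmt.Printf("attempt %d failed, will retry: %v\n", attempt, lastErr)
		time.Sleep(300 * time.Millisecond) // the log shows backoffs of roughly 250-450ms
	}
	fmt.Printf("giving up: %v\n", lastErr)
}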
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-213278
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-213278:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf",
	        "Created": "2025-12-06T09:09:12.980409254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:10:51.926966533Z",
	            "FinishedAt": "2025-12-06T09:10:50.976833337Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/hosts",
	        "LogPath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf-json.log",
	        "Name": "/default-k8s-diff-port-213278",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-213278:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-213278",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf",
	                "LowerDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-213278",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-213278/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-213278",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-213278",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213278",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bb7488f4d5f446be586ffec379aec4de46a8f9b8710623a08111f3a219863f51",
	            "SandboxKey": "/var/run/docker/netns/bb7488f4d5f4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-213278": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57bdd7b529719bb4288cd247e9e4bc85dc55500f3378aa22459233ae5de1bd98",
	                    "EndpointID": "e474e9bad674a9b737e3b49d12b170ef83618be979753e5e606306b7c222d4ed",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "5a:3d:98:b9:d1:96",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-213278",
	                        "7ed3f206e5bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
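Editorial aside (illustrative only): throughout this log the SSH host port is resolved from the inspect data with a Go template, e.g. docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213278, which against the JSON above yields 33118. A minimal sketch of the same lookup from Go follows; it assumes only that the docker CLI is on PATH and is not minikube's own code.

// inspect_port.go: minimal sketch of the host-port lookup done via
// `docker container inspect -f` in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostPort returns the published host port for a container port such as "22/tcp".
func hostPort(container, containerPort string) (string, error) {
	// Same Go template the log uses for the "22/tcp" mapping.
	tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Against the inspect output above, this resolves to 33118.
	port, err := hostPort("default-k8s-diff-port-213278", "22/tcp")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh host port:", port)
}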
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278: exit status 2 (373.528084ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-213278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-213278 logs -n 25: (1.487915579s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kindnet-646473                                                                                                                                               │ kindnet-646473               │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ start   │ -p enable-default-cni-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio │ enable-default-cni-646473    │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 pgrep -a kubelet                                                                                                                               │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /etc/nsswitch.conf                                                                                                                    │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /etc/hosts                                                                                                                            │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /etc/resolv.conf                                                                                                                      │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo crictl pods                                                                                                                               │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo crictl ps --all                                                                                                                           │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                    │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo ip a s                                                                                                                                    │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo ip r s                                                                                                                                    │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo iptables-save                                                                                                                             │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo iptables -t nat -L -n -v                                                                                                                  │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo systemctl status kubelet --all --full --no-pager                                                                                          │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo systemctl cat kubelet --no-pager                                                                                                          │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                           │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /etc/kubernetes/kubelet.conf                                                                                                          │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ pause   │ -p default-k8s-diff-port-213278 --alsologtostderr -v=1                                                                                                          │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo cat /var/lib/kubelet/config.yaml                                                                                                          │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo systemctl status docker --all --full --no-pager                                                                                           │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo systemctl cat docker --no-pager                                                                                                           │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /etc/docker/daemon.json                                                                                                               │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo docker system info                                                                                                                        │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo systemctl status cri-docker --all --full --no-pager                                                                                       │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo systemctl cat cri-docker --no-pager                                                                                                       │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:11:25
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:11:25.008640  326267 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:11:25.008746  326267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:25.008755  326267 out.go:374] Setting ErrFile to fd 2...
	I1206 09:11:25.008759  326267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:25.008935  326267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:11:25.009419  326267 out.go:368] Setting JSON to false
	I1206 09:11:25.010639  326267 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3236,"bootTime":1765009049,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:11:25.010721  326267 start.go:143] virtualization: kvm guest
	I1206 09:11:25.012605  326267 out.go:179] * [enable-default-cni-646473] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:11:25.014018  326267 notify.go:221] Checking for updates...
	I1206 09:11:25.014099  326267 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:11:25.015544  326267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:11:25.017040  326267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:11:25.018204  326267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:11:25.019363  326267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:11:25.021410  326267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:11:25.023475  326267 config.go:182] Loaded profile config "calico-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:25.023627  326267 config.go:182] Loaded profile config "custom-flannel-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:25.023762  326267 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:25.023937  326267 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:11:25.053642  326267 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:11:25.053774  326267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:11:25.122354  326267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-12-06 09:11:25.110265581 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:11:25.122461  326267 docker.go:319] overlay module found
	I1206 09:11:25.124235  326267 out.go:179] * Using the docker driver based on user configuration
	I1206 09:11:25.125584  326267 start.go:309] selected driver: docker
	I1206 09:11:25.125596  326267 start.go:927] validating driver "docker" against <nil>
	I1206 09:11:25.125607  326267 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:11:25.126235  326267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:11:25.196641  326267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-06 09:11:25.186082497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:11:25.196870  326267 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1206 09:11:25.197177  326267 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1206 09:11:25.197248  326267 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:11:25.199795  326267 out.go:179] * Using Docker driver with root privileges
	I1206 09:11:25.201040  326267 cni.go:84] Creating CNI manager for "bridge"
	I1206 09:11:25.201064  326267 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:11:25.201158  326267 start.go:353] cluster config:
	{Name:enable-default-cni-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:25.202635  326267 out.go:179] * Starting "enable-default-cni-646473" primary control-plane node in "enable-default-cni-646473" cluster
	I1206 09:11:25.203870  326267 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:11:25.205060  326267 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:11:25.206204  326267 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:25.206251  326267 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:11:25.206271  326267 cache.go:65] Caching tarball of preloaded images
	I1206 09:11:25.206302  326267 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:11:25.206375  326267 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:11:25.206389  326267 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:11:25.206503  326267 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/config.json ...
	I1206 09:11:25.206530  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/config.json: {Name:mk9b5b4044be3ee07f39ad55a326506414bd4e8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:25.230471  326267 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:11:25.230502  326267 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:11:25.230522  326267 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:11:25.230556  326267 start.go:360] acquireMachinesLock for enable-default-cni-646473: {Name:mk4c0a92bdf98edc18817404e4286b7b9a47295b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:11:25.230671  326267 start.go:364] duration metric: took 93.874µs to acquireMachinesLock for "enable-default-cni-646473"
	I1206 09:11:25.230701  326267 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:11:25.230778  326267 start.go:125] createHost starting for "" (driver="docker")
	W1206 09:11:23.525678  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	W1206 09:11:26.025807  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	I1206 09:11:25.018957  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Running}}
	I1206 09:11:25.040502  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:25.062349  325034 cli_runner.go:164] Run: docker exec custom-flannel-646473 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:11:25.112595  325034 oci.go:144] the created container "custom-flannel-646473" has a running status.
	I1206 09:11:25.112631  325034 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa...
	I1206 09:11:25.305604  325034 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:11:25.344039  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:25.370690  325034 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:11:25.370710  325034 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-646473 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:11:25.419353  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:25.440910  325034 machine.go:94] provisionDockerMachine start ...
	I1206 09:11:25.441012  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:25.464597  325034 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:25.464939  325034 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1206 09:11:25.464963  325034 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:11:25.606135  325034 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-646473
	
	I1206 09:11:25.606170  325034 ubuntu.go:182] provisioning hostname "custom-flannel-646473"
	I1206 09:11:25.606236  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:25.629601  325034 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:25.629943  325034 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1206 09:11:25.629971  325034 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-646473 && echo "custom-flannel-646473" | sudo tee /etc/hostname
	I1206 09:11:25.802369  325034 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-646473
	
	I1206 09:11:25.802452  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:25.823751  325034 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:25.824082  325034 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1206 09:11:25.824114  325034 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-646473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-646473/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-646473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:11:25.957021  325034 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:11:25.957056  325034 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:11:25.957079  325034 ubuntu.go:190] setting up certificates
	I1206 09:11:25.957091  325034 provision.go:84] configureAuth start
	I1206 09:11:25.957163  325034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-646473
	I1206 09:11:25.979268  325034 provision.go:143] copyHostCerts
	I1206 09:11:25.979337  325034 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:11:25.979350  325034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:11:25.979434  325034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:11:25.979556  325034 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:11:25.979569  325034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:11:25.979608  325034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:11:25.979700  325034 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:11:25.979714  325034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:11:25.979755  325034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:11:25.979847  325034 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-646473 san=[127.0.0.1 192.168.103.2 custom-flannel-646473 localhost minikube]
	I1206 09:11:26.045548  325034 provision.go:177] copyRemoteCerts
	I1206 09:11:26.045600  325034 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:11:26.045632  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.067303  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:26.161560  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:11:26.184241  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 09:11:26.203126  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:11:26.223076  325034 provision.go:87] duration metric: took 265.970299ms to configureAuth
	I1206 09:11:26.223109  325034 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:11:26.223318  325034 config.go:182] Loaded profile config "custom-flannel-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:26.223448  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.241479  325034 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:26.241731  325034 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1206 09:11:26.241752  325034 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:11:26.527104  325034 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:11:26.527131  325034 machine.go:97] duration metric: took 1.086200025s to provisionDockerMachine
	I1206 09:11:26.527143  325034 client.go:176] duration metric: took 6.56265812s to LocalClient.Create
	I1206 09:11:26.527165  325034 start.go:167] duration metric: took 6.56272307s to libmachine.API.Create "custom-flannel-646473"
	I1206 09:11:26.527174  325034 start.go:293] postStartSetup for "custom-flannel-646473" (driver="docker")
	I1206 09:11:26.527185  325034 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:11:26.527242  325034 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:11:26.527279  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.554840  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:26.658207  325034 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:11:26.663060  325034 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:11:26.663092  325034 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:11:26.663105  325034 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:11:26.663158  325034 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:11:26.663257  325034 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:11:26.663370  325034 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:11:26.671168  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:11:26.691872  325034 start.go:296] duration metric: took 164.684398ms for postStartSetup
	I1206 09:11:26.692778  325034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-646473
	I1206 09:11:26.712702  325034 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/config.json ...
	I1206 09:11:26.713015  325034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:11:26.713069  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.734851  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:26.831647  325034 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:11:26.836670  325034 start.go:128] duration metric: took 6.876684754s to createHost
	I1206 09:11:26.836698  325034 start.go:83] releasing machines lock for "custom-flannel-646473", held for 6.87681625s
	I1206 09:11:26.836771  325034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-646473
	I1206 09:11:26.864696  325034 ssh_runner.go:195] Run: cat /version.json
	I1206 09:11:26.864761  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.864892  325034 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:11:26.864981  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.888912  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:26.889252  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:27.062274  325034 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:27.070224  325034 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:11:27.105591  325034 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:11:27.110464  325034 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:11:27.110524  325034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:11:27.140131  325034 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:11:27.140160  325034 start.go:496] detecting cgroup driver to use...
	I1206 09:11:27.140195  325034 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:11:27.140245  325034 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:11:27.157192  325034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:11:27.170232  325034 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:11:27.170291  325034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:11:27.190081  325034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:11:27.211706  325034 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:11:27.316613  325034 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:11:27.426446  325034 docker.go:234] disabling docker service ...
	I1206 09:11:27.426511  325034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:11:27.448537  325034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:11:27.462027  325034 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:11:27.563923  325034 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:11:27.700610  325034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:11:27.720871  325034 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:11:27.745451  325034 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:11:27.745515  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.762432  325034 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:11:27.762508  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.777810  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.792236  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.806841  325034 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:11:27.821669  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.834272  325034 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.858976  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.872035  325034 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:11:27.882857  325034 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:11:27.893722  325034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:28.024117  325034 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:11:25.232752  326267 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:11:25.233042  326267 start.go:159] libmachine.API.Create for "enable-default-cni-646473" (driver="docker")
	I1206 09:11:25.233078  326267 client.go:173] LocalClient.Create starting
	I1206 09:11:25.233194  326267 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 09:11:25.233238  326267 main.go:143] libmachine: Decoding PEM data...
	I1206 09:11:25.233268  326267 main.go:143] libmachine: Parsing certificate...
	I1206 09:11:25.233337  326267 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 09:11:25.233368  326267 main.go:143] libmachine: Decoding PEM data...
	I1206 09:11:25.233389  326267 main.go:143] libmachine: Parsing certificate...
	I1206 09:11:25.233737  326267 cli_runner.go:164] Run: docker network inspect enable-default-cni-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:11:25.255391  326267 cli_runner.go:211] docker network inspect enable-default-cni-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:11:25.255475  326267 network_create.go:284] running [docker network inspect enable-default-cni-646473] to gather additional debugging logs...
	I1206 09:11:25.255496  326267 cli_runner.go:164] Run: docker network inspect enable-default-cni-646473
	W1206 09:11:25.276617  326267 cli_runner.go:211] docker network inspect enable-default-cni-646473 returned with exit code 1
	I1206 09:11:25.276651  326267 network_create.go:287] error running [docker network inspect enable-default-cni-646473]: docker network inspect enable-default-cni-646473: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-646473 not found
	I1206 09:11:25.276674  326267 network_create.go:289] output of [docker network inspect enable-default-cni-646473]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-646473 not found
	
	** /stderr **
	I1206 09:11:25.276792  326267 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:11:25.297402  326267 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9cbe8712784d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:e7:96:d9:b6:56} reservation:<nil>}
	I1206 09:11:25.298240  326267 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e3326c841ae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:98:ee:f3:0b:a9} reservation:<nil>}
	I1206 09:11:25.299119  326267 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c7af411946b0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:ab:a1:53:1d:7e} reservation:<nil>}
	I1206 09:11:25.299707  326267 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-80080615a73e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:81:2b:23:3c:10} reservation:<nil>}
	I1206 09:11:25.300206  326267 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-57bdd7b52971 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:54:0b:60:1c:a3} reservation:<nil>}
	I1206 09:11:25.301119  326267 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fa0090}
	I1206 09:11:25.301146  326267 network_create.go:124] attempt to create docker network enable-default-cni-646473 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1206 09:11:25.301202  326267 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-646473 enable-default-cni-646473
	I1206 09:11:25.373826  326267 network_create.go:108] docker network enable-default-cni-646473 192.168.94.0/24 created
	I1206 09:11:25.373858  326267 kic.go:121] calculated static IP "192.168.94.2" for the "enable-default-cni-646473" container
	I1206 09:11:25.373940  326267 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:11:25.399108  326267 cli_runner.go:164] Run: docker volume create enable-default-cni-646473 --label name.minikube.sigs.k8s.io=enable-default-cni-646473 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:11:25.420445  326267 oci.go:103] Successfully created a docker volume enable-default-cni-646473
	I1206 09:11:25.420513  326267 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-646473-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-646473 --entrypoint /usr/bin/test -v enable-default-cni-646473:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:11:25.847809  326267 oci.go:107] Successfully prepared a docker volume enable-default-cni-646473
	I1206 09:11:25.847890  326267 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:25.847906  326267 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:11:25.847977  326267 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-646473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:11:30.487929  325034 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.463768399s)
	I1206 09:11:30.487959  325034 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:11:30.488022  325034 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:11:30.493228  325034 start.go:564] Will wait 60s for crictl version
	I1206 09:11:30.493314  325034 ssh_runner.go:195] Run: which crictl
	I1206 09:11:30.498080  325034 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:11:30.531493  325034 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:11:30.531616  325034 ssh_runner.go:195] Run: crio --version
	I1206 09:11:30.571569  325034 ssh_runner.go:195] Run: crio --version
	I1206 09:11:30.606846  325034 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1206 09:11:28.028632  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	W1206 09:11:30.526635  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	I1206 09:11:30.608198  325034 cli_runner.go:164] Run: docker network inspect custom-flannel-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:11:30.632841  325034 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1206 09:11:30.637666  325034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:30.651295  325034 kubeadm.go:884] updating cluster {Name:custom-flannel-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:11:30.651444  325034 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:30.651495  325034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:30.687814  325034 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:11:30.687837  325034 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:11:30.687883  325034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:30.721340  325034 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:11:30.721370  325034 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:11:30.721379  325034 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1206 09:11:30.721481  325034 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-646473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1206 09:11:30.721562  325034 ssh_runner.go:195] Run: crio config
	I1206 09:11:30.771509  325034 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1206 09:11:30.771553  325034 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:11:30.771582  325034 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-646473 NodeName:custom-flannel-646473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:11:30.771734  325034 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-646473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:11:30.771799  325034 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:11:30.781522  325034 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:11:30.781593  325034 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:11:30.791062  325034 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1206 09:11:30.806150  325034 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:11:30.824380  325034 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1206 09:11:30.839884  325034 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:11:30.844302  325034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:30.856629  325034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:30.966687  325034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:30.996223  325034 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473 for IP: 192.168.103.2
	I1206 09:11:30.996245  325034 certs.go:195] generating shared ca certs ...
	I1206 09:11:30.996264  325034 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:30.996406  325034 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:11:30.996468  325034 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:11:30.996483  325034 certs.go:257] generating profile certs ...
	I1206 09:11:30.996558  325034 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.key
	I1206 09:11:30.996581  325034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.crt with IP's: []
	I1206 09:11:31.062308  325034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.crt ...
	I1206 09:11:31.062348  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.crt: {Name:mk55cf7b46b8dd8b3cbb3fa67bb95f8617961c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.062559  325034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.key ...
	I1206 09:11:31.062589  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.key: {Name:mk36301b53f85125c72f5348e5024dc93f0e8b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.062720  325034 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key.c30f6bc3
	I1206 09:11:31.063330  325034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt.c30f6bc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1206 09:11:31.185723  325034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt.c30f6bc3 ...
	I1206 09:11:31.185751  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt.c30f6bc3: {Name:mk7d08ff49cf9988bb032237e2b85c5e65744033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.191211  325034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key.c30f6bc3 ...
	I1206 09:11:31.191246  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key.c30f6bc3: {Name:mke3f76c716b252fbc00c3240ea8229049d5e6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.191400  325034 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt.c30f6bc3 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt
	I1206 09:11:31.191512  325034 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key.c30f6bc3 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key
	I1206 09:11:31.192257  325034 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.key
	I1206 09:11:31.192328  325034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.crt with IP's: []
	I1206 09:11:31.334066  325034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.crt ...
	I1206 09:11:31.334101  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.crt: {Name:mk73c3e5699aef5246dce8b7ed48af73e80ff91c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.334298  325034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.key ...
	I1206 09:11:31.334326  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.key: {Name:mkf059d3900cb7c6291e39f777402ea0ddb2f547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.334591  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:11:31.334641  325034 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:11:31.334652  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:11:31.334676  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:11:31.334706  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:11:31.334741  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:11:31.334802  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:11:31.335550  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:11:31.354565  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:11:31.372666  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:11:31.392188  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:11:31.411726  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1206 09:11:31.430108  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:11:31.447713  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:11:31.466159  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:11:31.485056  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:11:31.505938  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:11:31.526076  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:11:31.545459  325034 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:11:31.559506  325034 ssh_runner.go:195] Run: openssl version
	I1206 09:11:31.565625  325034 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:31.573351  325034 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:11:31.581306  325034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:31.585066  325034 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:31.585123  325034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:31.620499  325034 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:11:31.628625  325034 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:11:31.636181  325034 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:11:31.643812  325034 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:11:31.651576  325034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:11:31.655545  325034 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:11:31.655603  325034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:11:31.691185  325034 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:11:31.699696  325034 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:11:31.707937  325034 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:11:31.716081  325034 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:11:31.724705  325034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:11:31.728932  325034 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:11:31.729021  325034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:11:31.764419  325034 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:11:31.772362  325034 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:11:31.780173  325034 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:11:31.783943  325034 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:11:31.784025  325034 kubeadm.go:401] StartCluster: {Name:custom-flannel-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:31.784142  325034 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:11:31.784185  325034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:11:31.811269  325034 cri.go:89] found id: ""
	I1206 09:11:31.811334  325034 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:11:31.820214  325034 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:11:31.828604  325034 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:11:31.828659  325034 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:11:31.836742  325034 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:11:31.836758  325034 kubeadm.go:158] found existing configuration files:
	
	I1206 09:11:31.836807  325034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:11:31.845880  325034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:11:31.845928  325034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:11:31.854041  325034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:11:31.861825  325034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:11:31.861877  325034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:11:31.869571  325034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:11:31.877222  325034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:11:31.877277  325034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:11:31.884622  325034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:11:31.893138  325034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:11:31.893207  325034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:11:31.900815  325034 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:11:31.961861  325034 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:11:32.021909  325034 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:11:30.348389  326267 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-646473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.500319343s)
	I1206 09:11:30.348423  326267 kic.go:203] duration metric: took 4.500513378s to extract preloaded images to volume ...
	W1206 09:11:30.348522  326267 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:11:30.348566  326267 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:11:30.348615  326267 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:11:30.426556  326267 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-646473 --name enable-default-cni-646473 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-646473 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-646473 --network enable-default-cni-646473 --ip 192.168.94.2 --volume enable-default-cni-646473:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:11:30.786861  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Running}}
	I1206 09:11:30.807168  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Status}}
	I1206 09:11:30.827821  326267 cli_runner.go:164] Run: docker exec enable-default-cni-646473 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:11:30.877845  326267 oci.go:144] the created container "enable-default-cni-646473" has a running status.
	I1206 09:11:30.877877  326267 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa...
	I1206 09:11:30.973283  326267 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:11:31.007285  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Status}}
	I1206 09:11:31.033714  326267 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:11:31.033731  326267 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-646473 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:11:31.089079  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Status}}
	I1206 09:11:31.108836  326267 machine.go:94] provisionDockerMachine start ...
	I1206 09:11:31.108934  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:31.128967  326267 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:31.129326  326267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 09:11:31.129342  326267 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:11:31.130141  326267 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55300->127.0.0.1:33128: read: connection reset by peer
	I1206 09:11:34.261175  326267 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-646473
	
	I1206 09:11:34.261208  326267 ubuntu.go:182] provisioning hostname "enable-default-cni-646473"
	I1206 09:11:34.261270  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:34.280624  326267 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:34.280826  326267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 09:11:34.280842  326267 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-646473 && echo "enable-default-cni-646473" | sudo tee /etc/hostname
	I1206 09:11:34.420206  326267 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-646473
	
	I1206 09:11:34.420284  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:34.440172  326267 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:34.440396  326267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 09:11:34.440412  326267 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-646473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-646473/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-646473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:11:34.572498  326267 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:11:34.572535  326267 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:11:34.572555  326267 ubuntu.go:190] setting up certificates
	I1206 09:11:34.572566  326267 provision.go:84] configureAuth start
	I1206 09:11:34.572621  326267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646473
	I1206 09:11:34.594635  326267 provision.go:143] copyHostCerts
	I1206 09:11:34.594705  326267 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:11:34.594716  326267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:11:34.594810  326267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:11:34.594913  326267 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:11:34.594924  326267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:11:34.594960  326267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:11:34.595086  326267 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:11:34.595099  326267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:11:34.595132  326267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:11:34.595185  326267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-646473 san=[127.0.0.1 192.168.94.2 enable-default-cni-646473 localhost minikube]
	I1206 09:11:34.680462  326267 provision.go:177] copyRemoteCerts
	I1206 09:11:34.680535  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:11:34.680582  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:34.700580  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:34.801680  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:11:34.821508  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 09:11:34.839797  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:11:34.857703  326267 provision.go:87] duration metric: took 285.122924ms to configureAuth
	I1206 09:11:34.857743  326267 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:11:34.857898  326267 config.go:182] Loaded profile config "enable-default-cni-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:34.858002  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:34.877503  326267 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:34.877714  326267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 09:11:34.877730  326267 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1206 09:11:33.025877  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	I1206 09:11:35.025975  315313 pod_ready.go:94] pod "coredns-66bc5c9577-54hvq" is "Ready"
	I1206 09:11:35.026018  315313 pod_ready.go:86] duration metric: took 32.506225301s for pod "coredns-66bc5c9577-54hvq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.028723  315313 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.032925  315313 pod_ready.go:94] pod "etcd-default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:35.032952  315313 pod_ready.go:86] duration metric: took 4.205718ms for pod "etcd-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.034860  315313 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.038859  315313 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:35.038882  315313 pod_ready.go:86] duration metric: took 3.999393ms for pod "kube-apiserver-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.040968  315313 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.224236  315313 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:35.224269  315313 pod_ready.go:86] duration metric: took 183.248703ms for pod "kube-controller-manager-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.423020  315313 pod_ready.go:83] waiting for pod "kube-proxy-86f62" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.823583  315313 pod_ready.go:94] pod "kube-proxy-86f62" is "Ready"
	I1206 09:11:35.823613  315313 pod_ready.go:86] duration metric: took 400.567675ms for pod "kube-proxy-86f62" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:36.023873  315313 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:36.422938  315313 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:36.422964  315313 pod_ready.go:86] duration metric: took 399.066206ms for pod "kube-scheduler-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:36.422976  315313 pod_ready.go:40] duration metric: took 33.907075764s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:11:36.470472  315313 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:11:36.472236  315313 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-213278" cluster and "default" namespace by default
	W1206 09:11:36.484004  315313 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 1c8fddb9-6391-4b0d-a230-5577ea41d4f6
	I1206 09:11:35.157873  326267 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:11:35.157900  326267 machine.go:97] duration metric: took 4.049041898s to provisionDockerMachine
	I1206 09:11:35.157912  326267 client.go:176] duration metric: took 9.924823746s to LocalClient.Create
	I1206 09:11:35.157930  326267 start.go:167] duration metric: took 9.924890928s to libmachine.API.Create "enable-default-cni-646473"
	I1206 09:11:35.157940  326267 start.go:293] postStartSetup for "enable-default-cni-646473" (driver="docker")
	I1206 09:11:35.157952  326267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:11:35.158032  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:11:35.158080  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:35.176721  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:35.272966  326267 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:11:35.276622  326267 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:11:35.276653  326267 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:11:35.276665  326267 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:11:35.276720  326267 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:11:35.276814  326267 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:11:35.276927  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:11:35.284623  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:11:35.305227  326267 start.go:296] duration metric: took 147.272745ms for postStartSetup
	I1206 09:11:35.305576  326267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646473
	I1206 09:11:35.323549  326267 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/config.json ...
	I1206 09:11:35.323796  326267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:11:35.323832  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:35.341296  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:35.432423  326267 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:11:35.437365  326267 start.go:128] duration metric: took 10.206572934s to createHost
	I1206 09:11:35.437391  326267 start.go:83] releasing machines lock for "enable-default-cni-646473", held for 10.206704523s
	I1206 09:11:35.437475  326267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646473
	I1206 09:11:35.457029  326267 ssh_runner.go:195] Run: cat /version.json
	I1206 09:11:35.457072  326267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:11:35.457088  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:35.457167  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:35.479949  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:35.480513  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:35.630857  326267 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:35.637890  326267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:11:35.674910  326267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:11:35.679693  326267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:11:35.679755  326267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:11:35.705796  326267 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:11:35.705829  326267 start.go:496] detecting cgroup driver to use...
	I1206 09:11:35.705865  326267 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:11:35.705925  326267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:11:35.722449  326267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:11:35.735090  326267 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:11:35.735143  326267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:11:35.752701  326267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:11:35.771352  326267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:11:35.871345  326267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:11:35.975416  326267 docker.go:234] disabling docker service ...
	I1206 09:11:35.975485  326267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:11:35.996517  326267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:11:36.010867  326267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:11:36.098603  326267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:11:36.185324  326267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:11:36.198388  326267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:11:36.213284  326267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:11:36.213341  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.224182  326267 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:11:36.224240  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.233449  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.242576  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.251575  326267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:11:36.259959  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.269003  326267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.282772  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.291693  326267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:11:36.299605  326267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:11:36.307406  326267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:36.389548  326267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:11:36.539889  326267 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:11:36.539953  326267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:11:36.544671  326267 start.go:564] Will wait 60s for crictl version
	I1206 09:11:36.544726  326267 ssh_runner.go:195] Run: which crictl
	I1206 09:11:36.548592  326267 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:11:36.576188  326267 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:11:36.576273  326267 ssh_runner.go:195] Run: crio --version
	I1206 09:11:36.609401  326267 ssh_runner.go:195] Run: crio --version
	I1206 09:11:36.642100  326267 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
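	Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following values; this is a sketch reconstructed from the commands in this log, not a verbatim dump of the file (section headers omitted):
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]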
	I1206 09:11:36.643736  326267 cli_runner.go:164] Run: docker network inspect enable-default-cni-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:11:36.663802  326267 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:11:36.668215  326267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:36.678431  326267 kubeadm.go:884] updating cluster {Name:enable-default-cni-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:11:36.678558  326267 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:36.678613  326267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:36.713603  326267 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:11:36.713622  326267 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:11:36.713662  326267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:36.741694  326267 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:11:36.741714  326267 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:11:36.741720  326267 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1206 09:11:36.741794  326267 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=enable-default-cni-646473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1206 09:11:36.741858  326267 ssh_runner.go:195] Run: crio config
	I1206 09:11:36.805140  326267 cni.go:84] Creating CNI manager for "bridge"
	I1206 09:11:36.805171  326267 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:11:36.805200  326267 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-646473 NodeName:enable-default-cni-646473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:11:36.805342  326267 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-646473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:11:36.805411  326267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:11:36.813830  326267 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:11:36.813893  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:11:36.822383  326267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1206 09:11:36.836782  326267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:11:36.854084  326267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
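	Once the rendered configuration above has been copied onto the node (as kubeadm.yaml.new here, later promoted to /var/tmp/minikube/kubeadm.yaml before init), it can be sanity-checked without touching the cluster; a dry run against the bundled kubeadm binary is one option, shown only as an illustration and not something this test performs:
	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run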
	I1206 09:11:36.867122  326267 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:11:36.870961  326267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:36.881408  326267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:36.979649  326267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:37.012718  326267 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473 for IP: 192.168.94.2
	I1206 09:11:37.012743  326267 certs.go:195] generating shared ca certs ...
	I1206 09:11:37.012763  326267 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.012921  326267 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:11:37.012960  326267 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:11:37.012970  326267 certs.go:257] generating profile certs ...
	I1206 09:11:37.013049  326267 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.key
	I1206 09:11:37.013067  326267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.crt with IP's: []
	I1206 09:11:37.055706  326267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.crt ...
	I1206 09:11:37.055731  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.crt: {Name:mk223dd95154e1c1e223ee8518badd993fb018ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.055885  326267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.key ...
	I1206 09:11:37.055899  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.key: {Name:mk9a0506762cbc4e8935306519e44ef9164cb98d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.056020  326267 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key.c4ec9490
	I1206 09:11:37.056045  326267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt.c4ec9490 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:11:37.167684  326267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt.c4ec9490 ...
	I1206 09:11:37.167710  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt.c4ec9490: {Name:mk102590e72d82fec69700259b31339d27768d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.167875  326267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key.c4ec9490 ...
	I1206 09:11:37.167888  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key.c4ec9490: {Name:mkf5aa274eb33ca99e49fffc824f65410fc0a3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.167955  326267 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt.c4ec9490 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt
	I1206 09:11:37.168104  326267 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key.c4ec9490 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key
	I1206 09:11:37.168186  326267 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.key
	I1206 09:11:37.168204  326267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.crt with IP's: []
	I1206 09:11:37.317186  326267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.crt ...
	I1206 09:11:37.317211  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.crt: {Name:mk8e6dff39876a0be77ac8ec49087fba86bdc153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.317405  326267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.key ...
	I1206 09:11:37.317427  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.key: {Name:mka0f872fa6f9cddce1ccaf5709e1ac6e119f616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.317672  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:11:37.317724  326267 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:11:37.317740  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:11:37.317773  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:11:37.317812  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:11:37.317845  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:11:37.317912  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:11:37.318617  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:11:37.338173  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:11:37.356420  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:11:37.378593  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:11:37.403649  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 09:11:37.427513  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:11:37.449864  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:11:37.468206  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:11:37.492052  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:11:37.517440  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:11:37.538862  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:11:37.559407  326267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:11:37.572101  326267 ssh_runner.go:195] Run: openssl version
	I1206 09:11:37.578286  326267 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:37.585708  326267 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:11:37.593384  326267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:37.597316  326267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:37.597368  326267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:37.634059  326267 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:11:37.642324  326267 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:11:37.650331  326267 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:11:37.658957  326267 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:11:37.667272  326267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:11:37.671120  326267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:11:37.671177  326267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:11:37.719773  326267 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:11:37.728631  326267 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:11:37.738175  326267 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:11:37.746662  326267 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:11:37.754766  326267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:11:37.758746  326267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:11:37.758798  326267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:11:37.795391  326267 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:11:37.803899  326267 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
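	The openssl/ln steps above follow the standard OpenSSL CA-directory layout: each certificate under /usr/share/ca-certificates is also exposed in /etc/ssl/certs via a symlink named after its subject hash. The pattern the log applies to 91582.pem reduces to:
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem)  # 3ec20f2e in this run
	    sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/${HASH}.0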
	I1206 09:11:37.812160  326267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:11:37.816327  326267 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:11:37.816394  326267 kubeadm.go:401] StartCluster: {Name:enable-default-cni-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:37.816472  326267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:11:37.816523  326267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:11:37.849262  326267 cri.go:89] found id: ""
	I1206 09:11:37.849331  326267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:11:37.857744  326267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:11:37.866021  326267 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:11:37.866094  326267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:11:37.874070  326267 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:11:37.874087  326267 kubeadm.go:158] found existing configuration files:
	
	I1206 09:11:37.874154  326267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:11:37.881978  326267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:11:37.882179  326267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:11:37.890221  326267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:11:37.898679  326267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:11:37.898738  326267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:11:37.906799  326267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:11:37.915384  326267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:11:37.915456  326267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:11:37.923113  326267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:11:37.930955  326267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:11:37.931013  326267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:11:37.939270  326267 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:11:37.987210  326267 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:11:37.987288  326267 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:11:38.011713  326267 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:11:38.011796  326267 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:11:38.011876  326267 kubeadm.go:319] OS: Linux
	I1206 09:11:38.012004  326267 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:11:38.012110  326267 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:11:38.012197  326267 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:11:38.012279  326267 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:11:38.012363  326267 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:11:38.012445  326267 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:11:38.012504  326267 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:11:38.012542  326267 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:11:38.078802  326267 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:11:38.078948  326267 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:11:38.079104  326267 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:11:38.087892  326267 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:11:38.090856  326267 out.go:252]   - Generating certificates and keys ...
	I1206 09:11:38.091003  326267 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:11:38.091107  326267 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:11:38.330426  326267 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:11:38.346637  326267 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:11:38.379498  326267 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:11:38.775766  326267 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:11:38.825221  326267 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:11:38.825504  326267 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-646473 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:11:39.039323  326267 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:11:39.039540  326267 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-646473 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:11:39.138562  326267 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:11:39.382375  326267 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:11:39.798109  326267 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:11:39.798197  326267 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:11:40.110486  326267 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:11:40.675888  326267 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:11:40.985066  326267 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:11:41.322768  326267 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:11:41.504549  326267 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:11:41.506591  326267 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:11:41.510611  326267 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:11:42.803500  325034 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:11:42.803630  325034 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:11:42.803744  325034 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:11:42.803818  325034 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:11:42.803884  325034 kubeadm.go:319] OS: Linux
	I1206 09:11:42.803969  325034 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:11:42.804199  325034 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:11:42.804277  325034 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:11:42.804357  325034 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:11:42.804465  325034 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:11:42.804528  325034 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:11:42.804584  325034 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:11:42.804653  325034 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:11:42.804927  325034 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:11:42.805245  325034 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:11:42.805780  325034 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:11:42.806219  325034 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:11:42.808060  325034 out.go:252]   - Generating certificates and keys ...
	I1206 09:11:42.808169  325034 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:11:42.808263  325034 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:11:42.808901  325034 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:11:42.809004  325034 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:11:42.809093  325034 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:11:42.809162  325034 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:11:42.809345  325034 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:11:42.809581  325034 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-646473 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1206 09:11:42.809664  325034 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:11:42.809861  325034 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-646473 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1206 09:11:42.810144  325034 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:11:42.810242  325034 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:11:42.810300  325034 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:11:42.810378  325034 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:11:42.810446  325034 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:11:42.810520  325034 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:11:42.810589  325034 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:11:42.810683  325034 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:11:42.810756  325034 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:11:42.810935  325034 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:11:42.811145  325034 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:11:42.812742  325034 out.go:252]   - Booting up control plane ...
	I1206 09:11:42.812866  325034 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:11:42.813037  325034 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:11:42.813140  325034 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:11:42.813304  325034 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:11:42.813425  325034 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:11:42.813553  325034 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:11:42.813663  325034 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:11:42.813712  325034 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:11:42.813875  325034 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:11:42.814078  325034 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:11:42.814215  325034 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501160129s
	I1206 09:11:42.814369  325034 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:11:42.814482  325034 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1206 09:11:42.814615  325034 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:11:42.814863  325034 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:11:42.815005  325034 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.491895482s
	I1206 09:11:42.815123  325034 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.995246944s
	I1206 09:11:42.815236  325034 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001551444s
	I1206 09:11:42.815504  325034 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:11:42.815701  325034 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:11:42.815798  325034 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:11:42.816011  325034 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:11:42.816081  325034 kubeadm.go:319] [bootstrap-token] Using token: 3e6j9h.fmqb8cmf69r2qrmq
	I1206 09:11:42.817497  325034 out.go:252]   - Configuring RBAC rules ...
	I1206 09:11:42.817704  325034 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:11:42.817847  325034 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:11:42.818022  325034 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:11:42.818608  325034 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:11:42.818848  325034 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:11:42.819071  325034 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:11:42.819239  325034 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:11:42.819305  325034 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:11:42.819375  325034 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:11:42.819384  325034 kubeadm.go:319] 
	I1206 09:11:42.819471  325034 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:11:42.819480  325034 kubeadm.go:319] 
	I1206 09:11:42.819599  325034 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:11:42.819607  325034 kubeadm.go:319] 
	I1206 09:11:42.819644  325034 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:11:42.819732  325034 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:11:42.819809  325034 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:11:42.819817  325034 kubeadm.go:319] 
	I1206 09:11:42.819895  325034 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:11:42.819903  325034 kubeadm.go:319] 
	I1206 09:11:42.819972  325034 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:11:42.819978  325034 kubeadm.go:319] 
	I1206 09:11:42.820082  325034 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:11:42.820197  325034 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:11:42.820292  325034 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:11:42.820296  325034 kubeadm.go:319] 
	I1206 09:11:42.820417  325034 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:11:42.820521  325034 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:11:42.820526  325034 kubeadm.go:319] 
	I1206 09:11:42.820640  325034 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3e6j9h.fmqb8cmf69r2qrmq \
	I1206 09:11:42.820783  325034 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:11:42.820812  325034 kubeadm.go:319] 	--control-plane 
	I1206 09:11:42.820821  325034 kubeadm.go:319] 
	I1206 09:11:42.820944  325034 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:11:42.820953  325034 kubeadm.go:319] 
	I1206 09:11:42.821082  325034 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3e6j9h.fmqb8cmf69r2qrmq \
	I1206 09:11:42.821267  325034 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
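	The join command printed above carries the CA public-key hash that joining nodes use for discovery. If that hash ever needs to be recomputed from the cluster CA (kept here at /var/lib/minikube/certs/ca.crt), the usual kubeadm-documented pipeline applies; shown only as a reference, not something this test runs:
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'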
	I1206 09:11:42.821286  325034 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1206 09:11:42.823450  325034 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1206 09:11:42.824919  325034 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:11:42.824977  325034 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1206 09:11:42.830485  325034 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1206 09:11:42.830520  325034 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1206 09:11:42.865690  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:11:43.336327  325034 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:11:43.336404  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-646473 minikube.k8s.io/updated_at=2025_12_06T09_11_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=custom-flannel-646473 minikube.k8s.io/primary=true
	I1206 09:11:43.336404  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:43.464518  325034 ops.go:34] apiserver oom_adj: -16
	I1206 09:11:43.464622  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:43.964752  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:44.465612  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:41.512259  326267 out.go:252]   - Booting up control plane ...
	I1206 09:11:41.512369  326267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:11:41.512487  326267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:11:41.513001  326267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:11:41.527750  326267 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:11:41.527861  326267 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:11:41.534667  326267 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:11:41.534779  326267 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:11:41.534886  326267 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:11:41.650684  326267 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:11:41.650784  326267 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:11:42.152609  326267 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.962911ms
	I1206 09:11:42.158636  326267 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:11:42.158755  326267 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:11:42.158870  326267 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:11:42.159025  326267 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:11:44.162088  326267 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.003121619s
	I1206 09:11:44.572709  326267 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.413940205s
	I1206 09:11:46.160573  326267 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001845421s
	I1206 09:11:46.180884  326267 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:11:46.192419  326267 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:11:46.202611  326267 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:11:46.202946  326267 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:11:46.212959  326267 kubeadm.go:319] [bootstrap-token] Using token: 3qaobr.awz696n2m6r05jie
	I1206 09:11:44.964883  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:45.465281  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:45.965170  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:46.465441  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:46.965247  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:47.464763  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:47.542264  325034 kubeadm.go:1114] duration metric: took 4.205911766s to wait for elevateKubeSystemPrivileges
	I1206 09:11:47.542308  325034 kubeadm.go:403] duration metric: took 15.758285438s to StartCluster
	I1206 09:11:47.542332  325034 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:47.542399  325034 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:11:47.544833  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:47.545113  325034 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:11:47.545302  325034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:11:47.545840  325034 config.go:182] Loaded profile config "custom-flannel-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:47.545613  325034 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:11:47.545904  325034 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-646473"
	I1206 09:11:47.545913  325034 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-646473"
	I1206 09:11:47.545922  325034 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-646473"
	I1206 09:11:47.545926  325034 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-646473"
	I1206 09:11:47.545956  325034 host.go:66] Checking if "custom-flannel-646473" exists ...
	I1206 09:11:47.546409  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:47.546566  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:47.547976  325034 out.go:179] * Verifying Kubernetes components...
	I1206 09:11:47.550274  325034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:46.214472  326267 out.go:252]   - Configuring RBAC rules ...
	I1206 09:11:46.214659  326267 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:11:46.219178  326267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:11:46.224673  326267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:11:46.227357  326267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:11:46.230653  326267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:11:46.233552  326267 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:11:46.566101  326267 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:11:46.987904  326267 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:11:47.573363  326267 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:11:47.573387  326267 kubeadm.go:319] 
	I1206 09:11:47.573458  326267 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:11:47.573471  326267 kubeadm.go:319] 
	I1206 09:11:47.573562  326267 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:11:47.573569  326267 kubeadm.go:319] 
	I1206 09:11:47.573602  326267 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:11:47.573673  326267 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:11:47.573734  326267 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:11:47.573744  326267 kubeadm.go:319] 
	I1206 09:11:47.573805  326267 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:11:47.573811  326267 kubeadm.go:319] 
	I1206 09:11:47.573868  326267 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:11:47.573873  326267 kubeadm.go:319] 
	I1206 09:11:47.573932  326267 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:11:47.574051  326267 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:11:47.574132  326267 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:11:47.574138  326267 kubeadm.go:319] 
	I1206 09:11:47.574240  326267 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:11:47.574341  326267 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:11:47.574352  326267 kubeadm.go:319] 
	I1206 09:11:47.574465  326267 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3qaobr.awz696n2m6r05jie \
	I1206 09:11:47.574621  326267 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:11:47.574648  326267 kubeadm.go:319] 	--control-plane 
	I1206 09:11:47.574653  326267 kubeadm.go:319] 
	I1206 09:11:47.574748  326267 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:11:47.574756  326267 kubeadm.go:319] 
	I1206 09:11:47.574847  326267 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3qaobr.awz696n2m6r05jie \
	I1206 09:11:47.574960  326267 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:11:47.580919  326267 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:11:47.581075  326267 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:11:47.581263  326267 cni.go:84] Creating CNI manager for "bridge"
	I1206 09:11:47.581566  325034 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:11:47.582958  326267 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:11:47.582726  325034 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-646473"
	I1206 09:11:47.582771  325034 host.go:66] Checking if "custom-flannel-646473" exists ...
	I1206 09:11:47.583277  325034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:47.583301  325034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:11:47.583354  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:47.584185  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:47.623763  325034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:47.623788  325034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:11:47.623866  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:47.625679  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:47.665792  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:47.722872  325034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:11:47.759092  325034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:47.774326  325034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:47.800649  325034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:48.016694  325034 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1206 09:11:48.018709  325034 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-646473" to be "Ready" ...
	I1206 09:11:48.256074  325034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:11:48.257399  325034 addons.go:530] duration metric: took 711.783945ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:11:48.522543  325034 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-646473" context rescaled to 1 replicas
	I1206 09:11:47.589097  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:11:47.605879  326267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1206 09:11:47.632115  326267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:11:47.632383  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-646473 minikube.k8s.io/updated_at=2025_12_06T09_11_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=enable-default-cni-646473 minikube.k8s.io/primary=true
	I1206 09:11:47.632427  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:47.781240  326267 ops.go:34] apiserver oom_adj: -16
	I1206 09:11:47.781348  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:48.281668  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:48.782228  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:49.282182  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:49.782202  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Dec 06 09:11:24 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:24.518489979Z" level=info msg="Started container" PID=1735 containerID=e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper id=589ad6d3-c40a-4000-a238-5e9a96a758eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c54cce5adf6a742189a3b8db528baf390361e40c13d01d5f482ebe1bbbaaac3
	Dec 06 09:11:24 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:24.555509017Z" level=info msg="Removing container: e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e" id=6ba68220-f84a-41c8-b4e8-329b7f357f31 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:11:24 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:24.577169467Z" level=info msg="Removed container e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper" id=6ba68220-f84a-41c8-b4e8-329b7f357f31 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.580245767Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c91b7b83-ab7d-466b-af56-030dfa147cf9 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.581253993Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3b375cdf-52e1-4b5a-9475-060eb27a6654 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.58238506Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=16ffb1a6-2453-4bf2-ac7b-edf727cba1ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.582514041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.58664374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.586775684Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9443cd57345740f1f7d3da5a170b730a39fec0d7f900137c3e8deced77db2461/merged/etc/passwd: no such file or directory"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.586797229Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9443cd57345740f1f7d3da5a170b730a39fec0d7f900137c3e8deced77db2461/merged/etc/group: no such file or directory"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.587012941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.614431818Z" level=info msg="Created container 05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8: kube-system/storage-provisioner/storage-provisioner" id=16ffb1a6-2453-4bf2-ac7b-edf727cba1ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.615134476Z" level=info msg="Starting container: 05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8" id=4f283ae0-ff9a-49ff-8de4-f6ab588d1369 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.616847123Z" level=info msg="Started container" PID=1753 containerID=05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8 description=kube-system/storage-provisioner/storage-provisioner id=4f283ae0-ff9a-49ff-8de4-f6ab588d1369 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f52e745bc3b32f57b45b7149be036dfb9506e39f5d73e150d0b154b62a941679
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.448692941Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9446302e-3ecd-41b0-ae0d-e10bd5081ec1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.449779697Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1db153fc-72b8-4d1b-a83a-161a28322dfc name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.450875416Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper" id=2437f75a-1aad-4a79-868c-a5351314e27f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.451041444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.458022006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.458561478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.489539397Z" level=info msg="Created container 1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper" id=2437f75a-1aad-4a79-868c-a5351314e27f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.490293753Z" level=info msg="Starting container: 1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c" id=ccfff5ff-6afb-44d1-a3a3-8495449eaab0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.492588086Z" level=info msg="Started container" PID=1789 containerID=1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper id=ccfff5ff-6afb-44d1-a3a3-8495449eaab0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c54cce5adf6a742189a3b8db528baf390361e40c13d01d5f482ebe1bbbaaac3
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.62876197Z" level=info msg="Removing container: e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14" id=038a55a7-db83-4e0c-b19c-db30aa34aad1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.638916579Z" level=info msg="Removed container e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper" id=038a55a7-db83-4e0c-b19c-db30aa34aad1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	1eccc9eb14811       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   2c54cce5adf6a       dashboard-metrics-scraper-6ffb444bf9-676dh             kubernetes-dashboard
	05ac516aec1ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   f52e745bc3b32       storage-provisioner                                    kube-system
	51194e071a8c4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   f92805313c19d       kubernetes-dashboard-855c9754f9-hjxhr                  kubernetes-dashboard
	deb1986899497       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   c9a2cb815668e       coredns-66bc5c9577-54hvq                               kube-system
	88dfd712e3100       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   a510cdba763c5       busybox                                                default
	90f6f1c662ee4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   9af5f030a2006       kindnet-4jw2t                                          kube-system
	79f8e846255f8       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           50 seconds ago      Running             kube-proxy                  0                   412f7e2bdf8d8       kube-proxy-86f62                                       kube-system
	cfa9b86f728e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   f52e745bc3b32       storage-provisioner                                    kube-system
	993bd9094e371       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   30b167865f45e       etcd-default-k8s-diff-port-213278                      kube-system
	a151df7271144       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           53 seconds ago      Running             kube-apiserver              0                   f6e347e41f3f4       kube-apiserver-default-k8s-diff-port-213278            kube-system
	8fe294be79620       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           53 seconds ago      Running             kube-controller-manager     0                   4e5fbbcf993f6       kube-controller-manager-default-k8s-diff-port-213278   kube-system
	877ac8d6fa140       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           53 seconds ago      Running             kube-scheduler              0                   57c47552e7d48       kube-scheduler-default-k8s-diff-port-213278            kube-system
	
	
	==> coredns [deb19868994976b1519511ddc4ae28885b0e5e36a5be9d305b98fc87796e836e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42062 - 45557 "HINFO IN 112575248579643360.8630264767851062528. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.023676365s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-213278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-213278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=default-k8s-diff-port-213278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_09_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:09:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-213278
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:11:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:11:31 +0000   Sat, 06 Dec 2025 09:09:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:11:31 +0000   Sat, 06 Dec 2025 09:09:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:11:31 +0000   Sat, 06 Dec 2025 09:09:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:11:31 +0000   Sat, 06 Dec 2025 09:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-213278
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                eec647d7-7697-4ad8-a7c7-fd1943fc3364
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-54hvq                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m16s
	  kube-system                 etcd-default-k8s-diff-port-213278                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m21s
	  kube-system                 kindnet-4jw2t                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-213278             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-213278    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-86f62                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-default-k8s-diff-port-213278             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-676dh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hjxhr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  Starting                 50s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m26s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m21s                  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m21s                  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s                  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m17s                  node-controller  Node default-k8s-diff-port-213278 event: Registered Node default-k8s-diff-port-213278 in Controller
	  Normal  NodeReady                95s                    kubelet          Node default-k8s-diff-port-213278 status is now: NodeReady
	  Normal  Starting                 54s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)      kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)      kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)      kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-213278 event: Registered Node default-k8s-diff-port-213278 in Controller
	
	
	==> dmesg <==
	[  +0.092169] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028133] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	
	
	==> etcd [993bd9094e3710e4afa57b11133e4f8ed540f0bcf8e89c0258b11e42c9e374bc] <==
	{"level":"warn","ts":"2025-12-06T09:11:00.098939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.108304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.116482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.125228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.133043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.142977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.151461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.159256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.170374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.179132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.185984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.206772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.214581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.221768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.277940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:08.788972Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.079155ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597538724368267 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:601 > success:<request_put:<key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:755 >> failure:<request_range:<key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:11:08.789324Z","caller":"traceutil/trace.go:172","msg":"trace[1383292509] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"174.216189ms","start":"2025-12-06T09:11:08.615098Z","end":"2025-12-06T09:11:08.789314Z","steps":["trace[1383292509] 'process raft request'  (duration: 174.146228ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:11:08.789730Z","caller":"traceutil/trace.go:172","msg":"trace[1027492406] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"174.506858ms","start":"2025-12-06T09:11:08.614784Z","end":"2025-12-06T09:11:08.789291Z","steps":["trace[1027492406] 'process raft request'  (duration: 54.632736ms)","trace[1027492406] 'compare'  (duration: 118.960804ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:11:09.014298Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.323443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-213278\" limit:1 ","response":"range_response_count:1 size:8241"}
	{"level":"info","ts":"2025-12-06T09:11:09.014610Z","caller":"traceutil/trace.go:172","msg":"trace[1409804178] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-213278; range_end:; response_count:1; response_revision:614; }","duration":"139.679221ms","start":"2025-12-06T09:11:08.874905Z","end":"2025-12-06T09:11:09.014584Z","steps":["trace[1409804178] 'agreement among raft nodes before linearized reading'  (duration: 22.735389ms)","trace[1409804178] 'range keys from in-memory index tree'  (duration: 116.490072ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:11:09.015146Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.288439ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597538724368273 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh.187e954e67f7a91e\" mod_revision:608 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh.187e954e67f7a91e\" value_size:766 lease:499225501869591943 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh.187e954e67f7a91e\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:11:09.015694Z","caller":"traceutil/trace.go:172","msg":"trace[1028879696] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"144.368605ms","start":"2025-12-06T09:11:08.871316Z","end":"2025-12-06T09:11:09.015684Z","steps":["trace[1028879696] 'process raft request'  (duration: 26.43698ms)","trace[1028879696] 'compare'  (duration: 116.151895ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:11:09.015564Z","caller":"traceutil/trace.go:172","msg":"trace[231883224] linearizableReadLoop","detail":"{readStateIndex:649; appliedIndex:648; }","duration":"103.847548ms","start":"2025-12-06T09:11:08.911698Z","end":"2025-12-06T09:11:09.015546Z","steps":["trace[231883224] 'read index received'  (duration: 43.92µs)","trace[231883224] 'applied index is now lower than readState.Index'  (duration: 103.802236ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:11:09.015660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.95578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:11:09.017847Z","caller":"traceutil/trace.go:172","msg":"trace[919212663] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:615; }","duration":"106.069712ms","start":"2025-12-06T09:11:08.911693Z","end":"2025-12-06T09:11:09.017763Z","steps":["trace[919212663] 'agreement among raft nodes before linearized reading'  (duration: 103.881071ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:52 up 54 min,  0 user,  load average: 4.11, 3.33, 2.22
	Linux default-k8s-diff-port-213278 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90f6f1c662ee4be789481c3d36c939de768e2a68031835acafba34c8bd8c2c0a] <==
	I1206 09:11:02.090848       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:11:02.091163       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 09:11:02.091318       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:11:02.091340       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:11:02.091364       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:11:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:11:02.297741       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:11:02.389216       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:11:02.389241       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:11:02.389780       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:11:02.691740       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:11:02.691764       1 metrics.go:72] Registering metrics
	I1206 09:11:02.691842       1 controller.go:711] "Syncing nftables rules"
	I1206 09:11:12.297118       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:12.297191       1 main.go:301] handling current node
	I1206 09:11:22.299148       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:22.299188       1 main.go:301] handling current node
	I1206 09:11:32.297846       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:32.297895       1 main.go:301] handling current node
	I1206 09:11:42.302083       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:42.302131       1 main.go:301] handling current node
	I1206 09:11:52.299141       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:52.299281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a151df72711445119fa366f7061cd8c8a8baa812129f92483b799ac38a9b7756] <==
	I1206 09:11:00.819622       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:11:00.819768       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:11:00.819928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:11:00.820734       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:11:00.821926       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:11:00.822045       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1206 09:11:00.822103       1 aggregator.go:171] initial CRD sync complete...
	I1206 09:11:00.822113       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:11:00.822119       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:11:00.822126       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:11:00.823140       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:11:00.832056       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:11:00.886239       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:11:00.890479       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:11:01.224817       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:11:01.264715       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:11:01.291171       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:11:01.301293       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:11:01.311045       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:11:01.382250       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.181.213"}
	I1206 09:11:01.400757       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.138.232"}
	I1206 09:11:01.721686       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:11:04.442750       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:11:04.541538       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:11:04.689659       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8fe294be7962045740259ca379b55feefc319a86bae64f83cf89415bcf9eaea7] <==
	I1206 09:11:04.112408       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:11:04.114654       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:11:04.116928       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:11:04.118418       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:11:04.119697       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:11:04.135475       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:11:04.135609       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:11:04.135625       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:11:04.135637       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:11:04.135905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:11:04.136027       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:11:04.136535       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 09:11:04.136574       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:11:04.136595       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:11:04.136630       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 09:11:04.136737       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-213278"
	I1206 09:11:04.136781       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 09:11:04.141548       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:11:04.151742       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:11:04.153927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:11:04.155100       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:11:04.156304       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 09:11:04.156933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:11:04.157039       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:11:04.157480       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	
	
	==> kube-proxy [79f8e846255f85ab83dd33f39644030d86c3a149164871b704e48bf6ca0888b1] <==
	I1206 09:11:01.881769       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:11:01.962762       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:11:02.063828       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:11:02.063872       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 09:11:02.064050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:11:02.086375       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:11:02.086555       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:11:02.095074       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:11:02.095512       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:11:02.095544       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:11:02.096950       1 config.go:200] "Starting service config controller"
	I1206 09:11:02.097189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:11:02.097109       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:11:02.097285       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:11:02.097148       1 config.go:309] "Starting node config controller"
	I1206 09:11:02.097429       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:11:02.097593       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:11:02.097165       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:11:02.097737       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:11:02.198407       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:11:02.198433       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:11:02.198411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [877ac8d6fa140608aa94c4548bea183ea231d43b34b8e3afdb342cff6d7b7d13] <==
	I1206 09:10:59.431370       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:11:00.747386       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:11:00.747430       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:11:00.747729       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:11:00.747788       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:11:00.786170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:11:00.793684       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:11:00.801633       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:11:00.801734       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:11:00.801798       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:11:00.802728       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:11:00.902972       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:11:04 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:04.774777     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e257799f-93c1-460e-8143-bc16fc0365fd-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hjxhr\" (UID: \"e257799f-93c1-460e-8143-bc16fc0365fd\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjxhr"
	Dec 06 09:11:08 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:08.502449     732 scope.go:117] "RemoveContainer" containerID="ed3fe257958d9f09849d50439762531b05b359e7d9844efbec7d85ec34bd3680"
	Dec 06 09:11:09 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:09.508069     732 scope.go:117] "RemoveContainer" containerID="ed3fe257958d9f09849d50439762531b05b359e7d9844efbec7d85ec34bd3680"
	Dec 06 09:11:09 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:09.508258     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:09 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:09.508466     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:10 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:10.512676     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:10 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:10.512925     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:12 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:12.531388     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjxhr" podStartSLOduration=1.93236674 podStartE2EDuration="8.531362566s" podCreationTimestamp="2025-12-06 09:11:04 +0000 UTC" firstStartedPulling="2025-12-06 09:11:04.994874721 +0000 UTC m=+6.640120021" lastFinishedPulling="2025-12-06 09:11:11.593870544 +0000 UTC m=+13.239115847" observedRunningTime="2025-12-06 09:11:12.531288994 +0000 UTC m=+14.176534297" watchObservedRunningTime="2025-12-06 09:11:12.531362566 +0000 UTC m=+14.176607871"
	Dec 06 09:11:13 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:13.575267     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:13 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:13.575966     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:24 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:24.448154     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:24 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:24.554263     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:24 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:24.554637     732 scope.go:117] "RemoveContainer" containerID="e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14"
	Dec 06 09:11:24 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:24.554842     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:32 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:32.579794     732 scope.go:117] "RemoveContainer" containerID="cfa9b86f728e7ba4d6d1098b4b2284eb87b413da41766f3282ba776c9808cbcf"
	Dec 06 09:11:33 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:33.575524     732 scope.go:117] "RemoveContainer" containerID="e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14"
	Dec 06 09:11:33 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:33.575699     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:48 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:48.448214     732 scope.go:117] "RemoveContainer" containerID="e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14"
	Dec 06 09:11:48 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:48.627471     732 scope.go:117] "RemoveContainer" containerID="e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14"
	Dec 06 09:11:48 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:48.627696     732 scope.go:117] "RemoveContainer" containerID="1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c"
	Dec 06 09:11:48 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:48.627905     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:49 default-k8s-diff-port-213278 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:11:49 default-k8s-diff-port-213278 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:11:49 default-k8s-diff-port-213278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:11:49 default-k8s-diff-port-213278 systemd[1]: kubelet.service: Consumed 1.785s CPU time.
	
	
	==> kubernetes-dashboard [51194e071a8c47771f617e61bfe1e35cfe1b6d522ef2161e639970de26ba9592] <==
	2025/12/06 09:11:11 Using namespace: kubernetes-dashboard
	2025/12/06 09:11:11 Using in-cluster config to connect to apiserver
	2025/12/06 09:11:11 Using secret token for csrf signing
	2025/12/06 09:11:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:11:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:11:11 Successful initial request to the apiserver, version: v1.34.2
	2025/12/06 09:11:11 Generating JWE encryption key
	2025/12/06 09:11:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:11:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:11:11 Initializing JWE encryption key from synchronized object
	2025/12/06 09:11:11 Creating in-cluster Sidecar client
	2025/12/06 09:11:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:11:11 Serving insecurely on HTTP port: 9090
	2025/12/06 09:11:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:11:11 Starting overwatch
	
	
	==> storage-provisioner [05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8] <==
	I1206 09:11:32.631305       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:11:32.639460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:11:32.639509       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:11:32.641971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:36.097280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:40.358213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:43.957938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:47.012779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:50.035458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:50.040660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:11:50.040825       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:11:50.041023       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-213278_1c8b24ce-1322-4565-8a77-43e676b3b964!
	I1206 09:11:50.040974       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"649337db-0a79-4b9c-a481-f9515237bbf3", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-213278_1c8b24ce-1322-4565-8a77-43e676b3b964 became leader
	W1206 09:11:50.043051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:50.050011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:11:50.142022       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-213278_1c8b24ce-1322-4565-8a77-43e676b3b964!
	W1206 09:11:52.055160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:52.059905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cfa9b86f728e7ba4d6d1098b4b2284eb87b413da41766f3282ba776c9808cbcf] <==
	I1206 09:11:01.836278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:11:31.838317       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278: exit status 2 (372.038153ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-213278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-213278
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-213278:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf",
	        "Created": "2025-12-06T09:09:12.980409254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:10:51.926966533Z",
	            "FinishedAt": "2025-12-06T09:10:50.976833337Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/hosts",
	        "LogPath": "/var/lib/docker/containers/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf/7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf-json.log",
	        "Name": "/default-k8s-diff-port-213278",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-213278:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-213278",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ed3f206e5bb224b016ca80ac8d26a37704033e5ec41fdf32b20d34093c7a4cf",
	                "LowerDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd-init/diff:/var/lib/docker/overlay2/a31e92d5945d5279c396111b4b44aafb6cb691b1b041681dfe00969e027ee03c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7ebb2d5cfc2defd34548ff191018bcd4d2b00981152e72b70569212964363fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-213278",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-213278/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-213278",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-213278",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-213278",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bb7488f4d5f446be586ffec379aec4de46a8f9b8710623a08111f3a219863f51",
	            "SandboxKey": "/var/run/docker/netns/bb7488f4d5f4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-213278": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57bdd7b529719bb4288cd247e9e4bc85dc55500f3378aa22459233ae5de1bd98",
	                    "EndpointID": "e474e9bad674a9b737e3b49d12b170ef83618be979753e5e606306b7c222d4ed",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "5a:3d:98:b9:d1:96",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-213278",
	                        "7ed3f206e5bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278: exit status 2 (368.154617ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-213278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-213278 logs -n 25: (1.237685144s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-646473 sudo crictl ps --all                                          │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;   │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo ip a s                                                   │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo ip r s                                                   │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo iptables-save                                            │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo iptables -t nat -L -n -v                                 │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo systemctl status kubelet --all --full --no-pager         │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo systemctl cat kubelet --no-pager                         │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo journalctl -xeu kubelet --all --full --no-pager          │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /etc/kubernetes/kubelet.conf                         │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ pause   │ -p default-k8s-diff-port-213278 --alsologtostderr -v=1                         │ default-k8s-diff-port-213278 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo cat /var/lib/kubelet/config.yaml                         │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo systemctl status docker --all --full --no-pager          │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo systemctl cat docker --no-pager                          │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /etc/docker/daemon.json                              │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo docker system info                                       │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo systemctl status cri-docker --all --full --no-pager      │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo systemctl cat cri-docker --no-pager                      │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo cat /usr/lib/systemd/system/cri-docker.service           │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cri-dockerd --version                                    │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo systemctl status containerd --all --full --no-pager      │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ ssh     │ -p calico-646473 sudo systemctl cat containerd --no-pager                      │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /lib/systemd/system/containerd.service               │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ ssh     │ -p calico-646473 sudo cat /etc/containerd/config.toml                          │ calico-646473                │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:11:25
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:11:25.008640  326267 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:11:25.008746  326267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:25.008755  326267 out.go:374] Setting ErrFile to fd 2...
	I1206 09:11:25.008759  326267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:25.008935  326267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:11:25.009419  326267 out.go:368] Setting JSON to false
	I1206 09:11:25.010639  326267 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3236,"bootTime":1765009049,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:11:25.010721  326267 start.go:143] virtualization: kvm guest
	I1206 09:11:25.012605  326267 out.go:179] * [enable-default-cni-646473] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:11:25.014018  326267 notify.go:221] Checking for updates...
	I1206 09:11:25.014099  326267 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:11:25.015544  326267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:11:25.017040  326267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:11:25.018204  326267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:11:25.019363  326267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:11:25.021410  326267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:11:25.023475  326267 config.go:182] Loaded profile config "calico-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:25.023627  326267 config.go:182] Loaded profile config "custom-flannel-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:25.023762  326267 config.go:182] Loaded profile config "default-k8s-diff-port-213278": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:25.023937  326267 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:11:25.053642  326267 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:11:25.053774  326267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:11:25.122354  326267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-12-06 09:11:25.110265581 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:11:25.122461  326267 docker.go:319] overlay module found
	I1206 09:11:25.124235  326267 out.go:179] * Using the docker driver based on user configuration
	I1206 09:11:25.125584  326267 start.go:309] selected driver: docker
	I1206 09:11:25.125596  326267 start.go:927] validating driver "docker" against <nil>
	I1206 09:11:25.125607  326267 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:11:25.126235  326267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:11:25.196641  326267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-06 09:11:25.186082497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:11:25.196870  326267 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1206 09:11:25.197177  326267 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1206 09:11:25.197248  326267 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:11:25.199795  326267 out.go:179] * Using Docker driver with root privileges
	I1206 09:11:25.201040  326267 cni.go:84] Creating CNI manager for "bridge"
	I1206 09:11:25.201064  326267 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:11:25.201158  326267 start.go:353] cluster config:
	{Name:enable-default-cni-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:25.202635  326267 out.go:179] * Starting "enable-default-cni-646473" primary control-plane node in "enable-default-cni-646473" cluster
	I1206 09:11:25.203870  326267 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 09:11:25.205060  326267 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:11:25.206204  326267 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:25.206251  326267 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:11:25.206271  326267 cache.go:65] Caching tarball of preloaded images
	I1206 09:11:25.206302  326267 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:11:25.206375  326267 preload.go:238] Found /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:11:25.206389  326267 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:11:25.206503  326267 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/config.json ...
	I1206 09:11:25.206530  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/config.json: {Name:mk9b5b4044be3ee07f39ad55a326506414bd4e8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:25.230471  326267 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:11:25.230502  326267 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:11:25.230522  326267 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:11:25.230556  326267 start.go:360] acquireMachinesLock for enable-default-cni-646473: {Name:mk4c0a92bdf98edc18817404e4286b7b9a47295b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:11:25.230671  326267 start.go:364] duration metric: took 93.874µs to acquireMachinesLock for "enable-default-cni-646473"
	I1206 09:11:25.230701  326267 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:11:25.230778  326267 start.go:125] createHost starting for "" (driver="docker")
	W1206 09:11:23.525678  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	W1206 09:11:26.025807  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	I1206 09:11:25.018957  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Running}}
	I1206 09:11:25.040502  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:25.062349  325034 cli_runner.go:164] Run: docker exec custom-flannel-646473 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:11:25.112595  325034 oci.go:144] the created container "custom-flannel-646473" has a running status.
	I1206 09:11:25.112631  325034 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa...
	I1206 09:11:25.305604  325034 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:11:25.344039  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:25.370690  325034 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:11:25.370710  325034 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-646473 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:11:25.419353  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:25.440910  325034 machine.go:94] provisionDockerMachine start ...
	I1206 09:11:25.441012  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:25.464597  325034 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:25.464939  325034 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1206 09:11:25.464963  325034 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:11:25.606135  325034 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-646473
	
	I1206 09:11:25.606170  325034 ubuntu.go:182] provisioning hostname "custom-flannel-646473"
	I1206 09:11:25.606236  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:25.629601  325034 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:25.629943  325034 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1206 09:11:25.629971  325034 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-646473 && echo "custom-flannel-646473" | sudo tee /etc/hostname
	I1206 09:11:25.802369  325034 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-646473
	
	I1206 09:11:25.802452  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:25.823751  325034 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:25.824082  325034 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1206 09:11:25.824114  325034 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-646473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-646473/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-646473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:11:25.957021  325034 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:11:25.957056  325034 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:11:25.957079  325034 ubuntu.go:190] setting up certificates
	I1206 09:11:25.957091  325034 provision.go:84] configureAuth start
	I1206 09:11:25.957163  325034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-646473
	I1206 09:11:25.979268  325034 provision.go:143] copyHostCerts
	I1206 09:11:25.979337  325034 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:11:25.979350  325034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:11:25.979434  325034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:11:25.979556  325034 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:11:25.979569  325034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:11:25.979608  325034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:11:25.979700  325034 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:11:25.979714  325034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:11:25.979755  325034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:11:25.979847  325034 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-646473 san=[127.0.0.1 192.168.103.2 custom-flannel-646473 localhost minikube]
	I1206 09:11:26.045548  325034 provision.go:177] copyRemoteCerts
	I1206 09:11:26.045600  325034 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:11:26.045632  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.067303  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:26.161560  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:11:26.184241  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 09:11:26.203126  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:11:26.223076  325034 provision.go:87] duration metric: took 265.970299ms to configureAuth
	I1206 09:11:26.223109  325034 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:11:26.223318  325034 config.go:182] Loaded profile config "custom-flannel-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:26.223448  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.241479  325034 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:26.241731  325034 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1206 09:11:26.241752  325034 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:11:26.527104  325034 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:11:26.527131  325034 machine.go:97] duration metric: took 1.086200025s to provisionDockerMachine
	I1206 09:11:26.527143  325034 client.go:176] duration metric: took 6.56265812s to LocalClient.Create
	I1206 09:11:26.527165  325034 start.go:167] duration metric: took 6.56272307s to libmachine.API.Create "custom-flannel-646473"
	I1206 09:11:26.527174  325034 start.go:293] postStartSetup for "custom-flannel-646473" (driver="docker")
	I1206 09:11:26.527185  325034 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:11:26.527242  325034 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:11:26.527279  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.554840  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:26.658207  325034 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:11:26.663060  325034 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:11:26.663092  325034 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:11:26.663105  325034 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:11:26.663158  325034 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:11:26.663257  325034 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:11:26.663370  325034 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:11:26.671168  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:11:26.691872  325034 start.go:296] duration metric: took 164.684398ms for postStartSetup
	I1206 09:11:26.692778  325034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-646473
	I1206 09:11:26.712702  325034 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/config.json ...
	I1206 09:11:26.713015  325034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:11:26.713069  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.734851  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:26.831647  325034 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:11:26.836670  325034 start.go:128] duration metric: took 6.876684754s to createHost
	I1206 09:11:26.836698  325034 start.go:83] releasing machines lock for "custom-flannel-646473", held for 6.87681625s
	I1206 09:11:26.836771  325034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-646473
	I1206 09:11:26.864696  325034 ssh_runner.go:195] Run: cat /version.json
	I1206 09:11:26.864761  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.864892  325034 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:11:26.864981  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:26.888912  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:26.889252  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:27.062274  325034 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:27.070224  325034 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:11:27.105591  325034 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:11:27.110464  325034 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:11:27.110524  325034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:11:27.140131  325034 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:11:27.140160  325034 start.go:496] detecting cgroup driver to use...
	I1206 09:11:27.140195  325034 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:11:27.140245  325034 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:11:27.157192  325034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:11:27.170232  325034 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:11:27.170291  325034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:11:27.190081  325034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:11:27.211706  325034 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:11:27.316613  325034 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:11:27.426446  325034 docker.go:234] disabling docker service ...
	I1206 09:11:27.426511  325034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:11:27.448537  325034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:11:27.462027  325034 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:11:27.563923  325034 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:11:27.700610  325034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:11:27.720871  325034 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:11:27.745451  325034 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:11:27.745515  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.762432  325034 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:11:27.762508  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.777810  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.792236  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.806841  325034 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:11:27.821669  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.834272  325034 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.858976  325034 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:27.872035  325034 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:11:27.882857  325034 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:11:27.893722  325034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:28.024117  325034 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:11:25.232752  326267 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:11:25.233042  326267 start.go:159] libmachine.API.Create for "enable-default-cni-646473" (driver="docker")
	I1206 09:11:25.233078  326267 client.go:173] LocalClient.Create starting
	I1206 09:11:25.233194  326267 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem
	I1206 09:11:25.233238  326267 main.go:143] libmachine: Decoding PEM data...
	I1206 09:11:25.233268  326267 main.go:143] libmachine: Parsing certificate...
	I1206 09:11:25.233337  326267 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem
	I1206 09:11:25.233368  326267 main.go:143] libmachine: Decoding PEM data...
	I1206 09:11:25.233389  326267 main.go:143] libmachine: Parsing certificate...
	I1206 09:11:25.233737  326267 cli_runner.go:164] Run: docker network inspect enable-default-cni-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:11:25.255391  326267 cli_runner.go:211] docker network inspect enable-default-cni-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:11:25.255475  326267 network_create.go:284] running [docker network inspect enable-default-cni-646473] to gather additional debugging logs...
	I1206 09:11:25.255496  326267 cli_runner.go:164] Run: docker network inspect enable-default-cni-646473
	W1206 09:11:25.276617  326267 cli_runner.go:211] docker network inspect enable-default-cni-646473 returned with exit code 1
	I1206 09:11:25.276651  326267 network_create.go:287] error running [docker network inspect enable-default-cni-646473]: docker network inspect enable-default-cni-646473: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-646473 not found
	I1206 09:11:25.276674  326267 network_create.go:289] output of [docker network inspect enable-default-cni-646473]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-646473 not found
	
	** /stderr **
	I1206 09:11:25.276792  326267 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:11:25.297402  326267 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9cbe8712784d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:e7:96:d9:b6:56} reservation:<nil>}
	I1206 09:11:25.298240  326267 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8e3326c841ae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:98:ee:f3:0b:a9} reservation:<nil>}
	I1206 09:11:25.299119  326267 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c7af411946b0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:ab:a1:53:1d:7e} reservation:<nil>}
	I1206 09:11:25.299707  326267 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-80080615a73e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:81:2b:23:3c:10} reservation:<nil>}
	I1206 09:11:25.300206  326267 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-57bdd7b52971 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:54:0b:60:1c:a3} reservation:<nil>}
	I1206 09:11:25.301119  326267 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fa0090}
	I1206 09:11:25.301146  326267 network_create.go:124] attempt to create docker network enable-default-cni-646473 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1206 09:11:25.301202  326267 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-646473 enable-default-cni-646473
	I1206 09:11:25.373826  326267 network_create.go:108] docker network enable-default-cni-646473 192.168.94.0/24 created
	I1206 09:11:25.373858  326267 kic.go:121] calculated static IP "192.168.94.2" for the "enable-default-cni-646473" container
	I1206 09:11:25.373940  326267 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:11:25.399108  326267 cli_runner.go:164] Run: docker volume create enable-default-cni-646473 --label name.minikube.sigs.k8s.io=enable-default-cni-646473 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:11:25.420445  326267 oci.go:103] Successfully created a docker volume enable-default-cni-646473
	I1206 09:11:25.420513  326267 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-646473-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-646473 --entrypoint /usr/bin/test -v enable-default-cni-646473:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:11:25.847809  326267 oci.go:107] Successfully prepared a docker volume enable-default-cni-646473
	I1206 09:11:25.847890  326267 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:25.847906  326267 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:11:25.847977  326267 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-646473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:11:30.487929  325034 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.463768399s)
	I1206 09:11:30.487959  325034 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:11:30.488022  325034 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:11:30.493228  325034 start.go:564] Will wait 60s for crictl version
	I1206 09:11:30.493314  325034 ssh_runner.go:195] Run: which crictl
	I1206 09:11:30.498080  325034 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:11:30.531493  325034 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:11:30.531616  325034 ssh_runner.go:195] Run: crio --version
	I1206 09:11:30.571569  325034 ssh_runner.go:195] Run: crio --version
	I1206 09:11:30.606846  325034 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1206 09:11:28.028632  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	W1206 09:11:30.526635  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	I1206 09:11:30.608198  325034 cli_runner.go:164] Run: docker network inspect custom-flannel-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:11:30.632841  325034 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1206 09:11:30.637666  325034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:30.651295  325034 kubeadm.go:884] updating cluster {Name:custom-flannel-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:11:30.651444  325034 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:30.651495  325034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:30.687814  325034 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:11:30.687837  325034 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:11:30.687883  325034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:30.721340  325034 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:11:30.721370  325034 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:11:30.721379  325034 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1206 09:11:30.721481  325034 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-646473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1206 09:11:30.721562  325034 ssh_runner.go:195] Run: crio config
	I1206 09:11:30.771509  325034 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1206 09:11:30.771553  325034 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:11:30.771582  325034 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-646473 NodeName:custom-flannel-646473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:11:30.771734  325034 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-646473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:11:30.771799  325034 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:11:30.781522  325034 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:11:30.781593  325034 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:11:30.791062  325034 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1206 09:11:30.806150  325034 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:11:30.824380  325034 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1206 09:11:30.839884  325034 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:11:30.844302  325034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:30.856629  325034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:30.966687  325034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:30.996223  325034 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473 for IP: 192.168.103.2
	I1206 09:11:30.996245  325034 certs.go:195] generating shared ca certs ...
	I1206 09:11:30.996264  325034 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:30.996406  325034 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:11:30.996468  325034 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:11:30.996483  325034 certs.go:257] generating profile certs ...
	I1206 09:11:30.996558  325034 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.key
	I1206 09:11:30.996581  325034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.crt with IP's: []
	I1206 09:11:31.062308  325034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.crt ...
	I1206 09:11:31.062348  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.crt: {Name:mk55cf7b46b8dd8b3cbb3fa67bb95f8617961c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.062559  325034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.key ...
	I1206 09:11:31.062589  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/client.key: {Name:mk36301b53f85125c72f5348e5024dc93f0e8b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.062720  325034 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key.c30f6bc3
	I1206 09:11:31.063330  325034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt.c30f6bc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1206 09:11:31.185723  325034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt.c30f6bc3 ...
	I1206 09:11:31.185751  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt.c30f6bc3: {Name:mk7d08ff49cf9988bb032237e2b85c5e65744033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.191211  325034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key.c30f6bc3 ...
	I1206 09:11:31.191246  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key.c30f6bc3: {Name:mke3f76c716b252fbc00c3240ea8229049d5e6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.191400  325034 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt.c30f6bc3 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt
	I1206 09:11:31.191512  325034 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key.c30f6bc3 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key
	I1206 09:11:31.192257  325034 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.key
	I1206 09:11:31.192328  325034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.crt with IP's: []
	I1206 09:11:31.334066  325034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.crt ...
	I1206 09:11:31.334101  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.crt: {Name:mk73c3e5699aef5246dce8b7ed48af73e80ff91c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.334298  325034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.key ...
	I1206 09:11:31.334326  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.key: {Name:mkf059d3900cb7c6291e39f777402ea0ddb2f547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:31.334591  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:11:31.334641  325034 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:11:31.334652  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:11:31.334676  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:11:31.334706  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:11:31.334741  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:11:31.334802  325034 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:11:31.335550  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:11:31.354565  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:11:31.372666  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:11:31.392188  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:11:31.411726  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1206 09:11:31.430108  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:11:31.447713  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:11:31.466159  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/custom-flannel-646473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:11:31.485056  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:11:31.505938  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:11:31.526076  325034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:11:31.545459  325034 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:11:31.559506  325034 ssh_runner.go:195] Run: openssl version
	I1206 09:11:31.565625  325034 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:31.573351  325034 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:11:31.581306  325034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:31.585066  325034 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:31.585123  325034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:31.620499  325034 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:11:31.628625  325034 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:11:31.636181  325034 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:11:31.643812  325034 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:11:31.651576  325034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:11:31.655545  325034 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:11:31.655603  325034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:11:31.691185  325034 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:11:31.699696  325034 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:11:31.707937  325034 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:11:31.716081  325034 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:11:31.724705  325034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:11:31.728932  325034 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:11:31.729021  325034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:11:31.764419  325034 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:11:31.772362  325034 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:11:31.780173  325034 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:11:31.783943  325034 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:11:31.784025  325034 kubeadm.go:401] StartCluster: {Name:custom-flannel-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:31.784142  325034 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:11:31.784185  325034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:11:31.811269  325034 cri.go:89] found id: ""
	I1206 09:11:31.811334  325034 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:11:31.820214  325034 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:11:31.828604  325034 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:11:31.828659  325034 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:11:31.836742  325034 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:11:31.836758  325034 kubeadm.go:158] found existing configuration files:
	
	I1206 09:11:31.836807  325034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:11:31.845880  325034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:11:31.845928  325034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:11:31.854041  325034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:11:31.861825  325034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:11:31.861877  325034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:11:31.869571  325034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:11:31.877222  325034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:11:31.877277  325034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:11:31.884622  325034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:11:31.893138  325034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:11:31.893207  325034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:11:31.900815  325034 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:11:31.961861  325034 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:11:32.021909  325034 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:11:30.348389  326267 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-646473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.500319343s)
	I1206 09:11:30.348423  326267 kic.go:203] duration metric: took 4.500513378s to extract preloaded images to volume ...
	W1206 09:11:30.348522  326267 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:11:30.348566  326267 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:11:30.348615  326267 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:11:30.426556  326267 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-646473 --name enable-default-cni-646473 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-646473 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-646473 --network enable-default-cni-646473 --ip 192.168.94.2 --volume enable-default-cni-646473:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:11:30.786861  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Running}}
	I1206 09:11:30.807168  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Status}}
	I1206 09:11:30.827821  326267 cli_runner.go:164] Run: docker exec enable-default-cni-646473 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:11:30.877845  326267 oci.go:144] the created container "enable-default-cni-646473" has a running status.
	I1206 09:11:30.877877  326267 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa...
	I1206 09:11:30.973283  326267 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:11:31.007285  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Status}}
	I1206 09:11:31.033714  326267 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:11:31.033731  326267 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-646473 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:11:31.089079  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Status}}
	I1206 09:11:31.108836  326267 machine.go:94] provisionDockerMachine start ...
	I1206 09:11:31.108934  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:31.128967  326267 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:31.129326  326267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 09:11:31.129342  326267 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:11:31.130141  326267 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55300->127.0.0.1:33128: read: connection reset by peer
	I1206 09:11:34.261175  326267 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-646473
	
	I1206 09:11:34.261208  326267 ubuntu.go:182] provisioning hostname "enable-default-cni-646473"
	I1206 09:11:34.261270  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:34.280624  326267 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:34.280826  326267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 09:11:34.280842  326267 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-646473 && echo "enable-default-cni-646473" | sudo tee /etc/hostname
	I1206 09:11:34.420206  326267 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-646473
	
	I1206 09:11:34.420284  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:34.440172  326267 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:34.440396  326267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 09:11:34.440412  326267 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-646473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-646473/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-646473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:11:34.572498  326267 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:11:34.572535  326267 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5617/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5617/.minikube}
	I1206 09:11:34.572555  326267 ubuntu.go:190] setting up certificates
	I1206 09:11:34.572566  326267 provision.go:84] configureAuth start
	I1206 09:11:34.572621  326267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646473
	I1206 09:11:34.594635  326267 provision.go:143] copyHostCerts
	I1206 09:11:34.594705  326267 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem, removing ...
	I1206 09:11:34.594716  326267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem
	I1206 09:11:34.594810  326267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/ca.pem (1082 bytes)
	I1206 09:11:34.594913  326267 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem, removing ...
	I1206 09:11:34.594924  326267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem
	I1206 09:11:34.594960  326267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/cert.pem (1123 bytes)
	I1206 09:11:34.595086  326267 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem, removing ...
	I1206 09:11:34.595099  326267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem
	I1206 09:11:34.595132  326267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5617/.minikube/key.pem (1675 bytes)
	I1206 09:11:34.595185  326267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-646473 san=[127.0.0.1 192.168.94.2 enable-default-cni-646473 localhost minikube]
	I1206 09:11:34.680462  326267 provision.go:177] copyRemoteCerts
	I1206 09:11:34.680535  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:11:34.680582  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:34.700580  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:34.801680  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:11:34.821508  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 09:11:34.839797  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:11:34.857703  326267 provision.go:87] duration metric: took 285.122924ms to configureAuth
	I1206 09:11:34.857743  326267 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:11:34.857898  326267 config.go:182] Loaded profile config "enable-default-cni-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:34.858002  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:34.877503  326267 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:34.877714  326267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 09:11:34.877730  326267 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1206 09:11:33.025877  315313 pod_ready.go:104] pod "coredns-66bc5c9577-54hvq" is not "Ready", error: <nil>
	I1206 09:11:35.025975  315313 pod_ready.go:94] pod "coredns-66bc5c9577-54hvq" is "Ready"
	I1206 09:11:35.026018  315313 pod_ready.go:86] duration metric: took 32.506225301s for pod "coredns-66bc5c9577-54hvq" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.028723  315313 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.032925  315313 pod_ready.go:94] pod "etcd-default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:35.032952  315313 pod_ready.go:86] duration metric: took 4.205718ms for pod "etcd-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.034860  315313 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.038859  315313 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:35.038882  315313 pod_ready.go:86] duration metric: took 3.999393ms for pod "kube-apiserver-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.040968  315313 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.224236  315313 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:35.224269  315313 pod_ready.go:86] duration metric: took 183.248703ms for pod "kube-controller-manager-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.423020  315313 pod_ready.go:83] waiting for pod "kube-proxy-86f62" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:35.823583  315313 pod_ready.go:94] pod "kube-proxy-86f62" is "Ready"
	I1206 09:11:35.823613  315313 pod_ready.go:86] duration metric: took 400.567675ms for pod "kube-proxy-86f62" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:36.023873  315313 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:36.422938  315313 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-213278" is "Ready"
	I1206 09:11:36.422964  315313 pod_ready.go:86] duration metric: took 399.066206ms for pod "kube-scheduler-default-k8s-diff-port-213278" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:11:36.422976  315313 pod_ready.go:40] duration metric: took 33.907075764s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:11:36.470472  315313 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:11:36.472236  315313 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-213278" cluster and "default" namespace by default
	W1206 09:11:36.484004  315313 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 1c8fddb9-6391-4b0d-a230-5577ea41d4f6
	I1206 09:11:35.157873  326267 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:11:35.157900  326267 machine.go:97] duration metric: took 4.049041898s to provisionDockerMachine
	I1206 09:11:35.157912  326267 client.go:176] duration metric: took 9.924823746s to LocalClient.Create
	I1206 09:11:35.157930  326267 start.go:167] duration metric: took 9.924890928s to libmachine.API.Create "enable-default-cni-646473"
	I1206 09:11:35.157940  326267 start.go:293] postStartSetup for "enable-default-cni-646473" (driver="docker")
	I1206 09:11:35.157952  326267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:11:35.158032  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:11:35.158080  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:35.176721  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:35.272966  326267 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:11:35.276622  326267 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:11:35.276653  326267 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:11:35.276665  326267 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/addons for local assets ...
	I1206 09:11:35.276720  326267 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5617/.minikube/files for local assets ...
	I1206 09:11:35.276814  326267 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem -> 91582.pem in /etc/ssl/certs
	I1206 09:11:35.276927  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:11:35.284623  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:11:35.305227  326267 start.go:296] duration metric: took 147.272745ms for postStartSetup
	I1206 09:11:35.305576  326267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646473
	I1206 09:11:35.323549  326267 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/config.json ...
	I1206 09:11:35.323796  326267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:11:35.323832  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:35.341296  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:35.432423  326267 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:11:35.437365  326267 start.go:128] duration metric: took 10.206572934s to createHost
	I1206 09:11:35.437391  326267 start.go:83] releasing machines lock for "enable-default-cni-646473", held for 10.206704523s
	I1206 09:11:35.437475  326267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646473
	I1206 09:11:35.457029  326267 ssh_runner.go:195] Run: cat /version.json
	I1206 09:11:35.457072  326267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:11:35.457088  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:35.457167  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:35.479949  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:35.480513  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:35.630857  326267 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:35.637890  326267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:11:35.674910  326267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:11:35.679693  326267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:11:35.679755  326267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:11:35.705796  326267 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:11:35.705829  326267 start.go:496] detecting cgroup driver to use...
	I1206 09:11:35.705865  326267 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:11:35.705925  326267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:11:35.722449  326267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:11:35.735090  326267 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:11:35.735143  326267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:11:35.752701  326267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:11:35.771352  326267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:11:35.871345  326267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:11:35.975416  326267 docker.go:234] disabling docker service ...
	I1206 09:11:35.975485  326267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:11:35.996517  326267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:11:36.010867  326267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:11:36.098603  326267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:11:36.185324  326267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:11:36.198388  326267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:11:36.213284  326267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:11:36.213341  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.224182  326267 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 09:11:36.224240  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.233449  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.242576  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.251575  326267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:11:36.259959  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.269003  326267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.282772  326267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:11:36.291693  326267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:11:36.299605  326267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:11:36.307406  326267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:36.389548  326267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:11:36.539889  326267 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:11:36.539953  326267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:11:36.544671  326267 start.go:564] Will wait 60s for crictl version
	I1206 09:11:36.544726  326267 ssh_runner.go:195] Run: which crictl
	I1206 09:11:36.548592  326267 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:11:36.576188  326267 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 09:11:36.576273  326267 ssh_runner.go:195] Run: crio --version
	I1206 09:11:36.609401  326267 ssh_runner.go:195] Run: crio --version
	I1206 09:11:36.642100  326267 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 09:11:36.643736  326267 cli_runner.go:164] Run: docker network inspect enable-default-cni-646473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:11:36.663802  326267 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1206 09:11:36.668215  326267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:36.678431  326267 kubeadm.go:884] updating cluster {Name:enable-default-cni-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:11:36.678558  326267 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:36.678613  326267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:36.713603  326267 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:11:36.713622  326267 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:11:36.713662  326267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:36.741694  326267 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:11:36.741714  326267 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:11:36.741720  326267 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1206 09:11:36.741794  326267 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=enable-default-cni-646473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1206 09:11:36.741858  326267 ssh_runner.go:195] Run: crio config
	I1206 09:11:36.805140  326267 cni.go:84] Creating CNI manager for "bridge"
	I1206 09:11:36.805171  326267 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:11:36.805200  326267 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-646473 NodeName:enable-default-cni-646473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:11:36.805342  326267 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-646473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:11:36.805411  326267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:11:36.813830  326267 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:11:36.813893  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:11:36.822383  326267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1206 09:11:36.836782  326267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:11:36.854084  326267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 09:11:36.867122  326267 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:11:36.870961  326267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:36.881408  326267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:36.979649  326267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:37.012718  326267 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473 for IP: 192.168.94.2
	I1206 09:11:37.012743  326267 certs.go:195] generating shared ca certs ...
	I1206 09:11:37.012763  326267 certs.go:227] acquiring lock for ca certs: {Name:mk17147de12041a8e623a2ab3814a5d7ca8ed5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.012921  326267 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key
	I1206 09:11:37.012960  326267 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key
	I1206 09:11:37.012970  326267 certs.go:257] generating profile certs ...
	I1206 09:11:37.013049  326267 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.key
	I1206 09:11:37.013067  326267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.crt with IP's: []
	I1206 09:11:37.055706  326267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.crt ...
	I1206 09:11:37.055731  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.crt: {Name:mk223dd95154e1c1e223ee8518badd993fb018ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.055885  326267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.key ...
	I1206 09:11:37.055899  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/client.key: {Name:mk9a0506762cbc4e8935306519e44ef9164cb98d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.056020  326267 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key.c4ec9490
	I1206 09:11:37.056045  326267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt.c4ec9490 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1206 09:11:37.167684  326267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt.c4ec9490 ...
	I1206 09:11:37.167710  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt.c4ec9490: {Name:mk102590e72d82fec69700259b31339d27768d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.167875  326267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key.c4ec9490 ...
	I1206 09:11:37.167888  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key.c4ec9490: {Name:mkf5aa274eb33ca99e49fffc824f65410fc0a3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.167955  326267 certs.go:382] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt.c4ec9490 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt
	I1206 09:11:37.168104  326267 certs.go:386] copying /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key.c4ec9490 -> /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key
	I1206 09:11:37.168186  326267 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.key
	I1206 09:11:37.168204  326267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.crt with IP's: []
	I1206 09:11:37.317186  326267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.crt ...
	I1206 09:11:37.317211  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.crt: {Name:mk8e6dff39876a0be77ac8ec49087fba86bdc153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.317405  326267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.key ...
	I1206 09:11:37.317427  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.key: {Name:mka0f872fa6f9cddce1ccaf5709e1ac6e119f616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:37.317672  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem (1338 bytes)
	W1206 09:11:37.317724  326267 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158_empty.pem, impossibly tiny 0 bytes
	I1206 09:11:37.317740  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:11:37.317773  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:11:37.317812  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:11:37.317845  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/certs/key.pem (1675 bytes)
	I1206 09:11:37.317912  326267 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem (1708 bytes)
	I1206 09:11:37.318617  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:11:37.338173  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:11:37.356420  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:11:37.378593  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:11:37.403649  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 09:11:37.427513  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:11:37.449864  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:11:37.468206  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/enable-default-cni-646473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:11:37.492052  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/certs/9158.pem --> /usr/share/ca-certificates/9158.pem (1338 bytes)
	I1206 09:11:37.517440  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/ssl/certs/91582.pem --> /usr/share/ca-certificates/91582.pem (1708 bytes)
	I1206 09:11:37.538862  326267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:11:37.559407  326267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:11:37.572101  326267 ssh_runner.go:195] Run: openssl version
	I1206 09:11:37.578286  326267 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:37.585708  326267 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:11:37.593384  326267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:37.597316  326267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:37.597368  326267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:37.634059  326267 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:11:37.642324  326267 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:11:37.650331  326267 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9158.pem
	I1206 09:11:37.658957  326267 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9158.pem /etc/ssl/certs/9158.pem
	I1206 09:11:37.667272  326267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9158.pem
	I1206 09:11:37.671120  326267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:36 /usr/share/ca-certificates/9158.pem
	I1206 09:11:37.671177  326267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9158.pem
	I1206 09:11:37.719773  326267 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:11:37.728631  326267 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9158.pem /etc/ssl/certs/51391683.0
	I1206 09:11:37.738175  326267 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91582.pem
	I1206 09:11:37.746662  326267 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91582.pem /etc/ssl/certs/91582.pem
	I1206 09:11:37.754766  326267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91582.pem
	I1206 09:11:37.758746  326267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:36 /usr/share/ca-certificates/91582.pem
	I1206 09:11:37.758798  326267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91582.pem
	I1206 09:11:37.795391  326267 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:11:37.803899  326267 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91582.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:11:37.812160  326267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:11:37.816327  326267 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:11:37.816394  326267 kubeadm.go:401] StartCluster: {Name:enable-default-cni-646473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:37.816472  326267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:11:37.816523  326267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:11:37.849262  326267 cri.go:89] found id: ""
	I1206 09:11:37.849331  326267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:11:37.857744  326267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:11:37.866021  326267 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:11:37.866094  326267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:11:37.874070  326267 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:11:37.874087  326267 kubeadm.go:158] found existing configuration files:
	
	I1206 09:11:37.874154  326267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:11:37.881978  326267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:11:37.882179  326267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:11:37.890221  326267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:11:37.898679  326267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:11:37.898738  326267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:11:37.906799  326267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:11:37.915384  326267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:11:37.915456  326267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:11:37.923113  326267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:11:37.930955  326267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:11:37.931013  326267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:11:37.939270  326267 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:11:37.987210  326267 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:11:37.987288  326267 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:11:38.011713  326267 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:11:38.011796  326267 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:11:38.011876  326267 kubeadm.go:319] OS: Linux
	I1206 09:11:38.012004  326267 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:11:38.012110  326267 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:11:38.012197  326267 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:11:38.012279  326267 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:11:38.012363  326267 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:11:38.012445  326267 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:11:38.012504  326267 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:11:38.012542  326267 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:11:38.078802  326267 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:11:38.078948  326267 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:11:38.079104  326267 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:11:38.087892  326267 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:11:38.090856  326267 out.go:252]   - Generating certificates and keys ...
	I1206 09:11:38.091003  326267 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:11:38.091107  326267 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:11:38.330426  326267 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:11:38.346637  326267 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:11:38.379498  326267 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:11:38.775766  326267 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:11:38.825221  326267 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:11:38.825504  326267 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-646473 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:11:39.039323  326267 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:11:39.039540  326267 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-646473 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1206 09:11:39.138562  326267 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:11:39.382375  326267 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:11:39.798109  326267 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:11:39.798197  326267 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:11:40.110486  326267 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:11:40.675888  326267 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:11:40.985066  326267 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:11:41.322768  326267 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:11:41.504549  326267 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:11:41.506591  326267 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:11:41.510611  326267 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:11:42.803500  325034 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:11:42.803630  325034 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:11:42.803744  325034 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:11:42.803818  325034 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:11:42.803884  325034 kubeadm.go:319] OS: Linux
	I1206 09:11:42.803969  325034 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:11:42.804199  325034 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:11:42.804277  325034 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:11:42.804357  325034 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:11:42.804465  325034 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:11:42.804528  325034 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:11:42.804584  325034 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:11:42.804653  325034 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:11:42.804927  325034 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:11:42.805245  325034 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:11:42.805780  325034 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:11:42.806219  325034 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:11:42.808060  325034 out.go:252]   - Generating certificates and keys ...
	I1206 09:11:42.808169  325034 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:11:42.808263  325034 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:11:42.808901  325034 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:11:42.809004  325034 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:11:42.809093  325034 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:11:42.809162  325034 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:11:42.809345  325034 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:11:42.809581  325034 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-646473 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1206 09:11:42.809664  325034 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:11:42.809861  325034 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-646473 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1206 09:11:42.810144  325034 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:11:42.810242  325034 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:11:42.810300  325034 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:11:42.810378  325034 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:11:42.810446  325034 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:11:42.810520  325034 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:11:42.810589  325034 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:11:42.810683  325034 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:11:42.810756  325034 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:11:42.810935  325034 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:11:42.811145  325034 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:11:42.812742  325034 out.go:252]   - Booting up control plane ...
	I1206 09:11:42.812866  325034 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:11:42.813037  325034 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:11:42.813140  325034 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:11:42.813304  325034 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:11:42.813425  325034 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:11:42.813553  325034 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:11:42.813663  325034 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:11:42.813712  325034 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:11:42.813875  325034 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:11:42.814078  325034 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:11:42.814215  325034 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501160129s
	I1206 09:11:42.814369  325034 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:11:42.814482  325034 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1206 09:11:42.814615  325034 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:11:42.814863  325034 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:11:42.815005  325034 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.491895482s
	I1206 09:11:42.815123  325034 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.995246944s
	I1206 09:11:42.815236  325034 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001551444s
	I1206 09:11:42.815504  325034 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:11:42.815701  325034 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:11:42.815798  325034 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:11:42.816011  325034 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:11:42.816081  325034 kubeadm.go:319] [bootstrap-token] Using token: 3e6j9h.fmqb8cmf69r2qrmq
	I1206 09:11:42.817497  325034 out.go:252]   - Configuring RBAC rules ...
	I1206 09:11:42.817704  325034 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:11:42.817847  325034 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:11:42.818022  325034 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:11:42.818608  325034 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:11:42.818848  325034 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:11:42.819071  325034 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:11:42.819239  325034 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:11:42.819305  325034 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:11:42.819375  325034 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:11:42.819384  325034 kubeadm.go:319] 
	I1206 09:11:42.819471  325034 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:11:42.819480  325034 kubeadm.go:319] 
	I1206 09:11:42.819599  325034 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:11:42.819607  325034 kubeadm.go:319] 
	I1206 09:11:42.819644  325034 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:11:42.819732  325034 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:11:42.819809  325034 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:11:42.819817  325034 kubeadm.go:319] 
	I1206 09:11:42.819895  325034 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:11:42.819903  325034 kubeadm.go:319] 
	I1206 09:11:42.819972  325034 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:11:42.819978  325034 kubeadm.go:319] 
	I1206 09:11:42.820082  325034 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:11:42.820197  325034 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:11:42.820292  325034 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:11:42.820296  325034 kubeadm.go:319] 
	I1206 09:11:42.820417  325034 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:11:42.820521  325034 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:11:42.820526  325034 kubeadm.go:319] 
	I1206 09:11:42.820640  325034 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3e6j9h.fmqb8cmf69r2qrmq \
	I1206 09:11:42.820783  325034 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:11:42.820812  325034 kubeadm.go:319] 	--control-plane 
	I1206 09:11:42.820821  325034 kubeadm.go:319] 
	I1206 09:11:42.820944  325034 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:11:42.820953  325034 kubeadm.go:319] 
	I1206 09:11:42.821082  325034 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3e6j9h.fmqb8cmf69r2qrmq \
	I1206 09:11:42.821267  325034 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:11:42.821286  325034 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1206 09:11:42.823450  325034 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1206 09:11:42.824919  325034 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 09:11:42.824977  325034 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1206 09:11:42.830485  325034 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1206 09:11:42.830520  325034 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1206 09:11:42.865690  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 09:11:43.336327  325034 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:11:43.336404  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-646473 minikube.k8s.io/updated_at=2025_12_06T09_11_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=custom-flannel-646473 minikube.k8s.io/primary=true
	I1206 09:11:43.336404  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:43.464518  325034 ops.go:34] apiserver oom_adj: -16
	I1206 09:11:43.464622  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:43.964752  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:44.465612  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:41.512259  326267 out.go:252]   - Booting up control plane ...
	I1206 09:11:41.512369  326267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:11:41.512487  326267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:11:41.513001  326267 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:11:41.527750  326267 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:11:41.527861  326267 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:11:41.534667  326267 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:11:41.534779  326267 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:11:41.534886  326267 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:11:41.650684  326267 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:11:41.650784  326267 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:11:42.152609  326267 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.962911ms
	I1206 09:11:42.158636  326267 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:11:42.158755  326267 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1206 09:11:42.158870  326267 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:11:42.159025  326267 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:11:44.162088  326267 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.003121619s
	I1206 09:11:44.572709  326267 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.413940205s
	I1206 09:11:46.160573  326267 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001845421s
	I1206 09:11:46.180884  326267 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:11:46.192419  326267 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:11:46.202611  326267 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:11:46.202946  326267 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-646473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:11:46.212959  326267 kubeadm.go:319] [bootstrap-token] Using token: 3qaobr.awz696n2m6r05jie
	I1206 09:11:44.964883  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:45.465281  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:45.965170  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:46.465441  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:46.965247  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:47.464763  325034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:47.542264  325034 kubeadm.go:1114] duration metric: took 4.205911766s to wait for elevateKubeSystemPrivileges
	I1206 09:11:47.542308  325034 kubeadm.go:403] duration metric: took 15.758285438s to StartCluster
	I1206 09:11:47.542332  325034 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:47.542399  325034 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:11:47.544833  325034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:47.545113  325034 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:11:47.545302  325034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:11:47.545840  325034 config.go:182] Loaded profile config "custom-flannel-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:47.545613  325034 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:11:47.545904  325034 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-646473"
	I1206 09:11:47.545913  325034 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-646473"
	I1206 09:11:47.545922  325034 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-646473"
	I1206 09:11:47.545926  325034 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-646473"
	I1206 09:11:47.545956  325034 host.go:66] Checking if "custom-flannel-646473" exists ...
	I1206 09:11:47.546409  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:47.546566  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:47.547976  325034 out.go:179] * Verifying Kubernetes components...
	I1206 09:11:47.550274  325034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:46.214472  326267 out.go:252]   - Configuring RBAC rules ...
	I1206 09:11:46.214659  326267 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:11:46.219178  326267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:11:46.224673  326267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:11:46.227357  326267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:11:46.230653  326267 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:11:46.233552  326267 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:11:46.566101  326267 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:11:46.987904  326267 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:11:47.573363  326267 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:11:47.573387  326267 kubeadm.go:319] 
	I1206 09:11:47.573458  326267 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:11:47.573471  326267 kubeadm.go:319] 
	I1206 09:11:47.573562  326267 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:11:47.573569  326267 kubeadm.go:319] 
	I1206 09:11:47.573602  326267 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:11:47.573673  326267 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:11:47.573734  326267 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:11:47.573744  326267 kubeadm.go:319] 
	I1206 09:11:47.573805  326267 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:11:47.573811  326267 kubeadm.go:319] 
	I1206 09:11:47.573868  326267 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:11:47.573873  326267 kubeadm.go:319] 
	I1206 09:11:47.573932  326267 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:11:47.574051  326267 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:11:47.574132  326267 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:11:47.574138  326267 kubeadm.go:319] 
	I1206 09:11:47.574240  326267 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:11:47.574341  326267 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:11:47.574352  326267 kubeadm.go:319] 
	I1206 09:11:47.574465  326267 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3qaobr.awz696n2m6r05jie \
	I1206 09:11:47.574621  326267 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a \
	I1206 09:11:47.574648  326267 kubeadm.go:319] 	--control-plane 
	I1206 09:11:47.574653  326267 kubeadm.go:319] 
	I1206 09:11:47.574748  326267 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:11:47.574756  326267 kubeadm.go:319] 
	I1206 09:11:47.574847  326267 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3qaobr.awz696n2m6r05jie \
	I1206 09:11:47.574960  326267 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6138db9408d22606dc34f844c95714ba64451da7b1b46cc93b9f50ff536aa51a 
	I1206 09:11:47.580919  326267 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:11:47.581075  326267 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:11:47.581263  326267 cni.go:84] Creating CNI manager for "bridge"
	I1206 09:11:47.581566  325034 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:11:47.582958  326267 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:11:47.582726  325034 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-646473"
	I1206 09:11:47.582771  325034 host.go:66] Checking if "custom-flannel-646473" exists ...
	I1206 09:11:47.583277  325034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:47.583301  325034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:11:47.583354  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:47.584185  325034 cli_runner.go:164] Run: docker container inspect custom-flannel-646473 --format={{.State.Status}}
	I1206 09:11:47.623763  325034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:47.623788  325034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:11:47.623866  325034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646473
	I1206 09:11:47.625679  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:47.665792  325034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/custom-flannel-646473/id_rsa Username:docker}
	I1206 09:11:47.722872  325034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:11:47.759092  325034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:47.774326  325034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:47.800649  325034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:48.016694  325034 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1206 09:11:48.018709  325034 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-646473" to be "Ready" ...
	I1206 09:11:48.256074  325034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:11:48.257399  325034 addons.go:530] duration metric: took 711.783945ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:11:48.522543  325034 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-646473" context rescaled to 1 replicas
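	For context, the kapi.go line above records the "coredns" Deployment being rescaled to a single replica. A minimal sketch of that kind of rescale using client-go follows; the kubeconfig path and namespace are taken from this run, and this is purely illustrative, not minikube's own code:

	    package main

	    import (
	    	"context"
	    	"log"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	// Build a clientset from the kubeconfig written by this run (assumed path).
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22049-5617/kubeconfig")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		log.Fatal(err)
	    	}

	    	ctx := context.Background()
	    	// Read the current scale of the coredns Deployment, then set it to 1 replica.
	    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	scale.Spec.Replicas = 1
	    	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
	    		log.Fatal(err)
	    	}
	    	log.Println(`"coredns" deployment rescaled to 1 replica`)
	    }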
	I1206 09:11:47.589097  326267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:11:47.605879  326267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1206 09:11:47.632115  326267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:11:47.632383  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-646473 minikube.k8s.io/updated_at=2025_12_06T09_11_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=enable-default-cni-646473 minikube.k8s.io/primary=true
	I1206 09:11:47.632427  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:47.781240  326267 ops.go:34] apiserver oom_adj: -16
	I1206 09:11:47.781348  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:48.281668  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:48.782228  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:49.282182  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:49.782202  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:50.282183  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:50.781457  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:51.282490  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:51.781624  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:52.281522  326267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:52.358952  326267 kubeadm.go:1114] duration metric: took 4.726734867s to wait for elevateKubeSystemPrivileges
	I1206 09:11:52.359001  326267 kubeadm.go:403] duration metric: took 14.542595305s to StartCluster
	I1206 09:11:52.359025  326267 settings.go:142] acquiring lock: {Name:mkcb7f067d18a0fbb5248388de84c40e0d45204c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:52.359113  326267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:11:52.361428  326267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5617/kubeconfig: {Name:mkcddade18bcaa139d132b4df092f1a427b659db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:52.361719  326267 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:11:52.361898  326267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:11:52.362185  326267 config.go:182] Loaded profile config "enable-default-cni-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:11:52.362236  326267 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:11:52.362305  326267 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-646473"
	I1206 09:11:52.362323  326267 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-646473"
	I1206 09:11:52.362329  326267 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-646473"
	I1206 09:11:52.362350  326267 host.go:66] Checking if "enable-default-cni-646473" exists ...
	I1206 09:11:52.362351  326267 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-646473"
	I1206 09:11:52.362692  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Status}}
	I1206 09:11:52.363542  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Status}}
	I1206 09:11:52.363717  326267 out.go:179] * Verifying Kubernetes components...
	I1206 09:11:52.364972  326267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:52.389183  326267 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-646473"
	I1206 09:11:52.389228  326267 host.go:66] Checking if "enable-default-cni-646473" exists ...
	I1206 09:11:52.389784  326267 cli_runner.go:164] Run: docker container inspect enable-default-cni-646473 --format={{.State.Status}}
	I1206 09:11:52.389999  326267 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:11:52.391264  326267 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:52.391291  326267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:11:52.391346  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:52.426526  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:52.428881  326267 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:52.428940  326267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:11:52.429050  326267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646473
	I1206 09:11:52.455806  326267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/enable-default-cni-646473/id_rsa Username:docker}
	I1206 09:11:52.493580  326267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:11:52.534407  326267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:52.551282  326267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:52.589912  326267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:52.782918  326267 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1206 09:11:52.784330  326267 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-646473" to be "Ready" ...
	I1206 09:11:52.839716  326267 node_ready.go:49] node "enable-default-cni-646473" is "Ready"
	I1206 09:11:52.839767  326267 node_ready.go:38] duration metric: took 55.387758ms for node "enable-default-cni-646473" to be "Ready" ...
	I1206 09:11:52.839781  326267 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:11:52.839876  326267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:11:52.964786  326267 api_server.go:72] duration metric: took 603.034505ms to wait for apiserver process to appear ...
	I1206 09:11:52.964812  326267 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:11:52.964856  326267 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1206 09:11:52.972591  326267 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1206 09:11:52.973978  326267 api_server.go:141] control plane version: v1.34.2
	I1206 09:11:52.974056  326267 api_server.go:131] duration metric: took 9.235166ms to wait for apiserver health ...
	I1206 09:11:52.974067  326267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:11:52.975155  326267 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
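	As an aside, the apiserver health wait logged by api_server.go above amounts to polling the /healthz endpoint until it answers 200. A minimal sketch of such a probe in Go, assuming the https://192.168.94.2:8443 endpoint from this run and skipping TLS verification (minikube itself authenticates with the cluster CA and client certs from the kubeconfig); this is illustrative only, not minikube's actual implementation:

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	// Skip TLS verification for the sketch; a real client would trust the cluster CA.
	    	client := &http.Client{
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    		Timeout:   2 * time.Second,
	    	}
	    	deadline := time.Now().Add(4 * time.Minute) // generous deadline for the sketch
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get("https://192.168.94.2:8443/healthz")
	    		if err == nil {
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				fmt.Println("apiserver /healthz returned 200: ok")
	    				return
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	fmt.Println("timed out waiting for apiserver /healthz")
	    }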
	
	
	==> CRI-O <==
	Dec 06 09:11:24 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:24.518489979Z" level=info msg="Started container" PID=1735 containerID=e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper id=589ad6d3-c40a-4000-a238-5e9a96a758eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c54cce5adf6a742189a3b8db528baf390361e40c13d01d5f482ebe1bbbaaac3
	Dec 06 09:11:24 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:24.555509017Z" level=info msg="Removing container: e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e" id=6ba68220-f84a-41c8-b4e8-329b7f357f31 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:11:24 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:24.577169467Z" level=info msg="Removed container e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper" id=6ba68220-f84a-41c8-b4e8-329b7f357f31 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.580245767Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c91b7b83-ab7d-466b-af56-030dfa147cf9 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.581253993Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3b375cdf-52e1-4b5a-9475-060eb27a6654 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.58238506Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=16ffb1a6-2453-4bf2-ac7b-edf727cba1ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.582514041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.58664374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.586775684Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9443cd57345740f1f7d3da5a170b730a39fec0d7f900137c3e8deced77db2461/merged/etc/passwd: no such file or directory"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.586797229Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9443cd57345740f1f7d3da5a170b730a39fec0d7f900137c3e8deced77db2461/merged/etc/group: no such file or directory"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.587012941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.614431818Z" level=info msg="Created container 05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8: kube-system/storage-provisioner/storage-provisioner" id=16ffb1a6-2453-4bf2-ac7b-edf727cba1ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.615134476Z" level=info msg="Starting container: 05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8" id=4f283ae0-ff9a-49ff-8de4-f6ab588d1369 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:11:32 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:32.616847123Z" level=info msg="Started container" PID=1753 containerID=05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8 description=kube-system/storage-provisioner/storage-provisioner id=4f283ae0-ff9a-49ff-8de4-f6ab588d1369 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f52e745bc3b32f57b45b7149be036dfb9506e39f5d73e150d0b154b62a941679
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.448692941Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9446302e-3ecd-41b0-ae0d-e10bd5081ec1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.449779697Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1db153fc-72b8-4d1b-a83a-161a28322dfc name=/runtime.v1.ImageService/ImageStatus
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.450875416Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper" id=2437f75a-1aad-4a79-868c-a5351314e27f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.451041444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.458022006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.458561478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.489539397Z" level=info msg="Created container 1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper" id=2437f75a-1aad-4a79-868c-a5351314e27f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.490293753Z" level=info msg="Starting container: 1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c" id=ccfff5ff-6afb-44d1-a3a3-8495449eaab0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.492588086Z" level=info msg="Started container" PID=1789 containerID=1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper id=ccfff5ff-6afb-44d1-a3a3-8495449eaab0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c54cce5adf6a742189a3b8db528baf390361e40c13d01d5f482ebe1bbbaaac3
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.62876197Z" level=info msg="Removing container: e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14" id=038a55a7-db83-4e0c-b19c-db30aa34aad1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 09:11:48 default-k8s-diff-port-213278 crio[565]: time="2025-12-06T09:11:48.638916579Z" level=info msg="Removed container e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh/dashboard-metrics-scraper" id=038a55a7-db83-4e0c-b19c-db30aa34aad1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	1eccc9eb14811       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   2c54cce5adf6a       dashboard-metrics-scraper-6ffb444bf9-676dh             kubernetes-dashboard
	05ac516aec1ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   f52e745bc3b32       storage-provisioner                                    kube-system
	51194e071a8c4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   f92805313c19d       kubernetes-dashboard-855c9754f9-hjxhr                  kubernetes-dashboard
	deb1986899497       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   c9a2cb815668e       coredns-66bc5c9577-54hvq                               kube-system
	88dfd712e3100       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   a510cdba763c5       busybox                                                default
	90f6f1c662ee4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   9af5f030a2006       kindnet-4jw2t                                          kube-system
	79f8e846255f8       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           52 seconds ago      Running             kube-proxy                  0                   412f7e2bdf8d8       kube-proxy-86f62                                       kube-system
	cfa9b86f728e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   f52e745bc3b32       storage-provisioner                                    kube-system
	993bd9094e371       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   30b167865f45e       etcd-default-k8s-diff-port-213278                      kube-system
	a151df7271144       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           55 seconds ago      Running             kube-apiserver              0                   f6e347e41f3f4       kube-apiserver-default-k8s-diff-port-213278            kube-system
	8fe294be79620       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           55 seconds ago      Running             kube-controller-manager     0                   4e5fbbcf993f6       kube-controller-manager-default-k8s-diff-port-213278   kube-system
	877ac8d6fa140       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           55 seconds ago      Running             kube-scheduler              0                   57c47552e7d48       kube-scheduler-default-k8s-diff-port-213278            kube-system
	
	
	==> coredns [deb19868994976b1519511ddc4ae28885b0e5e36a5be9d305b98fc87796e836e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42062 - 45557 "HINFO IN 112575248579643360.8630264767851062528. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.023676365s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-213278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-213278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=default-k8s-diff-port-213278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_09_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:09:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-213278
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:11:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:11:31 +0000   Sat, 06 Dec 2025 09:09:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:11:31 +0000   Sat, 06 Dec 2025 09:09:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:11:31 +0000   Sat, 06 Dec 2025 09:09:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:11:31 +0000   Sat, 06 Dec 2025 09:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-213278
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                eec647d7-7697-4ad8-a7c7-fd1943fc3364
	  Boot ID:                    84ef1a4e-3ecc-4292-bf21-6bc1abfbe2ad
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-54hvq                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-213278                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m23s
	  kube-system                 kindnet-4jw2t                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-213278             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-213278    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-86f62                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-213278             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-676dh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hjxhr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m17s                  kube-proxy       
	  Normal  Starting                 52s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m28s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m28s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m28s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m23s                  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m23s                  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s                  kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-213278 event: Registered Node default-k8s-diff-port-213278 in Controller
	  Normal  NodeReady                97s                    kubelet          Node default-k8s-diff-port-213278 status is now: NodeReady
	  Normal  Starting                 56s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)      kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)      kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)      kubelet          Node default-k8s-diff-port-213278 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                    node-controller  Node default-k8s-diff-port-213278 event: Registered Node default-k8s-diff-port-213278 in Controller
	
	
	==> dmesg <==
	[  +4.915906] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 08:30] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.006315] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.022922] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023843] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.023841] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +1.024923] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +2.046747] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +4.031581] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[  +8.063106] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[ +32.252616] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: e2 18 57 b2 51 23 16 7f 31 32 ba 87 08 00
	[Dec 6 09:11] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b7 0a ee d3 79 08 06
	
	
	==> etcd [993bd9094e3710e4afa57b11133e4f8ed540f0bcf8e89c0258b11e42c9e374bc] <==
	{"level":"warn","ts":"2025-12-06T09:11:00.098939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.108304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.116482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.125228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.133043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.142977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.151461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.159256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.170374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.179132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.185984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.206772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.214581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.221768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:00.277940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:11:08.788972Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.079155ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597538724368267 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:601 > success:<request_put:<key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:755 >> failure:<request_range:<key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:11:08.789324Z","caller":"traceutil/trace.go:172","msg":"trace[1383292509] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"174.216189ms","start":"2025-12-06T09:11:08.615098Z","end":"2025-12-06T09:11:08.789314Z","steps":["trace[1383292509] 'process raft request'  (duration: 174.146228ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:11:08.789730Z","caller":"traceutil/trace.go:172","msg":"trace[1027492406] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"174.506858ms","start":"2025-12-06T09:11:08.614784Z","end":"2025-12-06T09:11:08.789291Z","steps":["trace[1027492406] 'process raft request'  (duration: 54.632736ms)","trace[1027492406] 'compare'  (duration: 118.960804ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:11:09.014298Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.323443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-213278\" limit:1 ","response":"range_response_count:1 size:8241"}
	{"level":"info","ts":"2025-12-06T09:11:09.014610Z","caller":"traceutil/trace.go:172","msg":"trace[1409804178] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-213278; range_end:; response_count:1; response_revision:614; }","duration":"139.679221ms","start":"2025-12-06T09:11:08.874905Z","end":"2025-12-06T09:11:09.014584Z","steps":["trace[1409804178] 'agreement among raft nodes before linearized reading'  (duration: 22.735389ms)","trace[1409804178] 'range keys from in-memory index tree'  (duration: 116.490072ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:11:09.015146Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.288439ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597538724368273 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh.187e954e67f7a91e\" mod_revision:608 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh.187e954e67f7a91e\" value_size:766 lease:499225501869591943 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh.187e954e67f7a91e\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:11:09.015694Z","caller":"traceutil/trace.go:172","msg":"trace[1028879696] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"144.368605ms","start":"2025-12-06T09:11:08.871316Z","end":"2025-12-06T09:11:09.015684Z","steps":["trace[1028879696] 'process raft request'  (duration: 26.43698ms)","trace[1028879696] 'compare'  (duration: 116.151895ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T09:11:09.015564Z","caller":"traceutil/trace.go:172","msg":"trace[231883224] linearizableReadLoop","detail":"{readStateIndex:649; appliedIndex:648; }","duration":"103.847548ms","start":"2025-12-06T09:11:08.911698Z","end":"2025-12-06T09:11:09.015546Z","steps":["trace[231883224] 'read index received'  (duration: 43.92µs)","trace[231883224] 'applied index is now lower than readState.Index'  (duration: 103.802236ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:11:09.015660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.95578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:11:09.017847Z","caller":"traceutil/trace.go:172","msg":"trace[919212663] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:615; }","duration":"106.069712ms","start":"2025-12-06T09:11:08.911693Z","end":"2025-12-06T09:11:09.017763Z","steps":["trace[919212663] 'agreement among raft nodes before linearized reading'  (duration: 103.881071ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:54 up 54 min,  0 user,  load average: 4.11, 3.33, 2.22
	Linux default-k8s-diff-port-213278 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90f6f1c662ee4be789481c3d36c939de768e2a68031835acafba34c8bd8c2c0a] <==
	I1206 09:11:02.090848       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 09:11:02.091163       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 09:11:02.091318       1 main.go:148] setting mtu 1500 for CNI 
	I1206 09:11:02.091340       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 09:11:02.091364       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T09:11:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 09:11:02.297741       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 09:11:02.389216       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 09:11:02.389241       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 09:11:02.389780       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1206 09:11:02.691740       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 09:11:02.691764       1 metrics.go:72] Registering metrics
	I1206 09:11:02.691842       1 controller.go:711] "Syncing nftables rules"
	I1206 09:11:12.297118       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:12.297191       1 main.go:301] handling current node
	I1206 09:11:22.299148       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:22.299188       1 main.go:301] handling current node
	I1206 09:11:32.297846       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:32.297895       1 main.go:301] handling current node
	I1206 09:11:42.302083       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:42.302131       1 main.go:301] handling current node
	I1206 09:11:52.299141       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 09:11:52.299281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a151df72711445119fa366f7061cd8c8a8baa812129f92483b799ac38a9b7756] <==
	I1206 09:11:00.819622       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:11:00.819768       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:11:00.819928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:11:00.820734       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:11:00.821926       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:11:00.822045       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1206 09:11:00.822103       1 aggregator.go:171] initial CRD sync complete...
	I1206 09:11:00.822113       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:11:00.822119       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:11:00.822126       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:11:00.823140       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:11:00.832056       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:11:00.886239       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:11:00.890479       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:11:01.224817       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:11:01.264715       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:11:01.291171       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:11:01.301293       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:11:01.311045       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:11:01.382250       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.181.213"}
	I1206 09:11:01.400757       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.138.232"}
	I1206 09:11:01.721686       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:11:04.442750       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:11:04.541538       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:11:04.689659       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8fe294be7962045740259ca379b55feefc319a86bae64f83cf89415bcf9eaea7] <==
	I1206 09:11:04.112408       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:11:04.114654       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:11:04.116928       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:11:04.118418       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:11:04.119697       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:11:04.135475       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:11:04.135609       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:11:04.135625       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:11:04.135637       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:11:04.135905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:11:04.136027       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:11:04.136535       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 09:11:04.136574       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:11:04.136595       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:11:04.136630       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 09:11:04.136737       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-213278"
	I1206 09:11:04.136781       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 09:11:04.141548       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:11:04.151742       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:11:04.153927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:11:04.155100       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:11:04.156304       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 09:11:04.156933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:11:04.157039       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:11:04.157480       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	
	
	==> kube-proxy [79f8e846255f85ab83dd33f39644030d86c3a149164871b704e48bf6ca0888b1] <==
	I1206 09:11:01.881769       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:11:01.962762       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:11:02.063828       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:11:02.063872       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 09:11:02.064050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:11:02.086375       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:11:02.086555       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:11:02.095074       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:11:02.095512       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:11:02.095544       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:11:02.096950       1 config.go:200] "Starting service config controller"
	I1206 09:11:02.097189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:11:02.097109       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:11:02.097285       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:11:02.097148       1 config.go:309] "Starting node config controller"
	I1206 09:11:02.097429       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:11:02.097593       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:11:02.097165       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:11:02.097737       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:11:02.198407       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:11:02.198433       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:11:02.198411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [877ac8d6fa140608aa94c4548bea183ea231d43b34b8e3afdb342cff6d7b7d13] <==
	I1206 09:10:59.431370       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:11:00.747386       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:11:00.747430       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:11:00.747729       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:11:00.747788       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:11:00.786170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:11:00.793684       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:11:00.801633       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:11:00.801734       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:11:00.801798       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:11:00.802728       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:11:00.902972       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:11:04 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:04.774777     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e257799f-93c1-460e-8143-bc16fc0365fd-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hjxhr\" (UID: \"e257799f-93c1-460e-8143-bc16fc0365fd\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjxhr"
	Dec 06 09:11:08 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:08.502449     732 scope.go:117] "RemoveContainer" containerID="ed3fe257958d9f09849d50439762531b05b359e7d9844efbec7d85ec34bd3680"
	Dec 06 09:11:09 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:09.508069     732 scope.go:117] "RemoveContainer" containerID="ed3fe257958d9f09849d50439762531b05b359e7d9844efbec7d85ec34bd3680"
	Dec 06 09:11:09 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:09.508258     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:09 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:09.508466     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:10 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:10.512676     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:10 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:10.512925     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:12 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:12.531388     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjxhr" podStartSLOduration=1.93236674 podStartE2EDuration="8.531362566s" podCreationTimestamp="2025-12-06 09:11:04 +0000 UTC" firstStartedPulling="2025-12-06 09:11:04.994874721 +0000 UTC m=+6.640120021" lastFinishedPulling="2025-12-06 09:11:11.593870544 +0000 UTC m=+13.239115847" observedRunningTime="2025-12-06 09:11:12.531288994 +0000 UTC m=+14.176534297" watchObservedRunningTime="2025-12-06 09:11:12.531362566 +0000 UTC m=+14.176607871"
	Dec 06 09:11:13 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:13.575267     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:13 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:13.575966     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:24 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:24.448154     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:24 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:24.554263     732 scope.go:117] "RemoveContainer" containerID="e710ca83dc5f2fcc51f0ff5075e7eb20d88787ad24993402fd52e41d5016153e"
	Dec 06 09:11:24 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:24.554637     732 scope.go:117] "RemoveContainer" containerID="e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14"
	Dec 06 09:11:24 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:24.554842     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:32 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:32.579794     732 scope.go:117] "RemoveContainer" containerID="cfa9b86f728e7ba4d6d1098b4b2284eb87b413da41766f3282ba776c9808cbcf"
	Dec 06 09:11:33 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:33.575524     732 scope.go:117] "RemoveContainer" containerID="e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14"
	Dec 06 09:11:33 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:33.575699     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:48 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:48.448214     732 scope.go:117] "RemoveContainer" containerID="e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14"
	Dec 06 09:11:48 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:48.627471     732 scope.go:117] "RemoveContainer" containerID="e3ad51c0cc34ddc13afba3bbae7cb7af750b76e9870dd240408420cc3d858f14"
	Dec 06 09:11:48 default-k8s-diff-port-213278 kubelet[732]: I1206 09:11:48.627696     732 scope.go:117] "RemoveContainer" containerID="1eccc9eb148116b618c472d60e8d051d69d2c2c06572ae67a5fe2cd4f894b03c"
	Dec 06 09:11:48 default-k8s-diff-port-213278 kubelet[732]: E1206 09:11:48.627905     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-676dh_kubernetes-dashboard(58ae439b-0b89-402b-8044-5e71edba3f28)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-676dh" podUID="58ae439b-0b89-402b-8044-5e71edba3f28"
	Dec 06 09:11:49 default-k8s-diff-port-213278 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 09:11:49 default-k8s-diff-port-213278 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 09:11:49 default-k8s-diff-port-213278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:11:49 default-k8s-diff-port-213278 systemd[1]: kubelet.service: Consumed 1.785s CPU time.
	
	
	==> kubernetes-dashboard [51194e071a8c47771f617e61bfe1e35cfe1b6d522ef2161e639970de26ba9592] <==
	2025/12/06 09:11:11 Starting overwatch
	2025/12/06 09:11:11 Using namespace: kubernetes-dashboard
	2025/12/06 09:11:11 Using in-cluster config to connect to apiserver
	2025/12/06 09:11:11 Using secret token for csrf signing
	2025/12/06 09:11:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 09:11:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 09:11:11 Successful initial request to the apiserver, version: v1.34.2
	2025/12/06 09:11:11 Generating JWE encryption key
	2025/12/06 09:11:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 09:11:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 09:11:11 Initializing JWE encryption key from synchronized object
	2025/12/06 09:11:11 Creating in-cluster Sidecar client
	2025/12/06 09:11:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 09:11:11 Serving insecurely on HTTP port: 9090
	2025/12/06 09:11:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [05ac516aec1abf839ab6aa761275207624150ed381c06c9e2e1154ba617d1fc8] <==
	I1206 09:11:32.631305       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:11:32.639460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:11:32.639509       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:11:32.641971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:36.097280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:40.358213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:43.957938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:47.012779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:50.035458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:50.040660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:11:50.040825       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:11:50.041023       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-213278_1c8b24ce-1322-4565-8a77-43e676b3b964!
	I1206 09:11:50.040974       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"649337db-0a79-4b9c-a481-f9515237bbf3", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-213278_1c8b24ce-1322-4565-8a77-43e676b3b964 became leader
	W1206 09:11:50.043051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:50.050011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:11:50.142022       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-213278_1c8b24ce-1322-4565-8a77-43e676b3b964!
	W1206 09:11:52.055160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:52.059905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:54.064694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:54.069858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cfa9b86f728e7ba4d6d1098b4b2284eb87b413da41766f3282ba776c9808cbcf] <==
	I1206 09:11:01.836278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:11:31.838317       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278: exit status 2 (353.722599ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-213278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.48s)
E1206 09:12:45.029526    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:45.035937    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:45.047355    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:45.068773    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:45.110177    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:45.191471    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:45.353512    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:45.675075    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:46.317112    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:47.598484    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (355/415)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.01
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 4.41
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.1
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.82
31 TestOffline 89.47
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 122.51
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 8.42
57 TestAddons/StoppedEnableDisable 16.66
58 TestCertOptions 23.56
59 TestCertExpiration 212.86
61 TestForceSystemdFlag 32.54
62 TestForceSystemdEnv 37.52
67 TestErrorSpam/setup 21.65
68 TestErrorSpam/start 0.65
69 TestErrorSpam/status 0.94
70 TestErrorSpam/pause 5.59
71 TestErrorSpam/unpause 6.44
72 TestErrorSpam/stop 2.62
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 36.21
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.03
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.13
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.55
84 TestFunctional/serial/CacheCmd/cache/add_local 0.89
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 42.03
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.19
95 TestFunctional/serial/LogsFileCmd 1.21
96 TestFunctional/serial/InvalidService 4.59
98 TestFunctional/parallel/ConfigCmd 0.42
99 TestFunctional/parallel/DashboardCmd 5.54
100 TestFunctional/parallel/DryRun 0.42
101 TestFunctional/parallel/InternationalLanguage 0.18
102 TestFunctional/parallel/StatusCmd 1.12
106 TestFunctional/parallel/ServiceCmdConnect 7.67
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 25.27
110 TestFunctional/parallel/SSHCmd 0.78
111 TestFunctional/parallel/CpCmd 1.7
112 TestFunctional/parallel/MySQL 16.69
113 TestFunctional/parallel/FileSync 0.4
114 TestFunctional/parallel/CertSync 1.95
118 TestFunctional/parallel/NodeLabels 0.09
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
122 TestFunctional/parallel/License 0.23
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.17
124 TestFunctional/parallel/Version/short 0.06
125 TestFunctional/parallel/Version/components 0.5
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.15
131 TestFunctional/parallel/ImageCommands/Setup 0.43
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.03
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.78
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.92
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
137 TestFunctional/parallel/ServiceCmd/List 0.36
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
142 TestFunctional/parallel/ServiceCmd/Format 0.44
143 TestFunctional/parallel/ServiceCmd/URL 0.47
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
149 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.26
152 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
153 TestFunctional/parallel/ProfileCmd/profile_list 0.51
154 TestFunctional/parallel/MountCmd/any-port 10.83
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
156 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
157 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
161 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
162 TestFunctional/parallel/MountCmd/specific-port 1.87
163 TestFunctional/parallel/MountCmd/VerifyCleanup 2.03
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 39.08
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.2
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.56
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.85
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.53
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 47.84
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.2
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.22
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.02
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 6.26
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.4
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.16
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.97
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 10.7
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.19
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 24.44
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.65
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.6
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 15.57
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.32
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.24
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.67
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.26
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.08
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.49
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.26
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.3
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.25
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.99
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.16
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.45
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.16
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.17
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.18
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.47
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.54
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.19
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.59
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 12.28
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 4.13
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 1.18
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.51
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.7
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.41
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.14
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 6.99
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.74
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.98
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.8
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.63
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.57
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.58
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 154.41
266 TestMultiControlPlane/serial/DeployApp 4.59
267 TestMultiControlPlane/serial/PingHostFromPods 1.04
268 TestMultiControlPlane/serial/AddWorkerNode 53.56
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
271 TestMultiControlPlane/serial/CopyFile 16.91
272 TestMultiControlPlane/serial/StopSecondaryNode 13.84
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.7
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 202.84
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.06
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
279 TestMultiControlPlane/serial/StopCluster 48.05
280 TestMultiControlPlane/serial/RestartCluster 56.17
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
282 TestMultiControlPlane/serial/AddSecondaryNode 56.96
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
288 TestJSONOutput/start/Command 37.37
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 7.99
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.24
313 TestKicCustomNetwork/create_custom_network 29.74
314 TestKicCustomNetwork/use_default_bridge_network 21.67
315 TestKicExistingNetwork 26.08
316 TestKicCustomSubnet 28.12
317 TestKicStaticIP 26.31
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 43.79
322 TestMountStart/serial/StartWithMountFirst 7.69
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 5
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.68
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 7.16
330 TestMountStart/serial/VerifyMountPostStop 0.27
333 TestMultiNode/serial/FreshStart2Nodes 95.52
334 TestMultiNode/serial/DeployApp2Nodes 3.7
335 TestMultiNode/serial/PingHostFrom2Pods 0.73
336 TestMultiNode/serial/AddNode 23.16
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.65
339 TestMultiNode/serial/CopyFile 9.81
340 TestMultiNode/serial/StopNode 2.27
341 TestMultiNode/serial/StartAfterStop 7.12
342 TestMultiNode/serial/RestartKeepsNodes 78.63
343 TestMultiNode/serial/DeleteNode 5.26
344 TestMultiNode/serial/StopMultiNode 28.47
345 TestMultiNode/serial/RestartMultiNode 51.79
346 TestMultiNode/serial/ValidateNameConflict 24.27
351 TestPreload 98.16
353 TestScheduledStopUnix 99.54
356 TestInsufficientStorage 8.81
357 TestRunningBinaryUpgrade 43.74
359 TestKubernetesUpgrade 298
360 TestMissingContainerUpgrade 86.81
362 TestPause/serial/Start 46.22
363 TestPause/serial/SecondStartNoReconfiguration 6.22
365 TestStoppedBinaryUpgrade/Setup 0.54
366 TestStoppedBinaryUpgrade/Upgrade 310.01
368 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
369 TestNoKubernetes/serial/StartWithK8s 20.34
370 TestNoKubernetes/serial/StartWithStopK8s 15.81
371 TestNoKubernetes/serial/Start 4.1
372 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
373 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
374 TestNoKubernetes/serial/ProfileList 26.73
389 TestNetworkPlugins/group/false 3.42
394 TestStartStop/group/old-k8s-version/serial/FirstStart 49
395 TestNoKubernetes/serial/Stop 2.66
396 TestNoKubernetes/serial/StartNoArgs 6.4
397 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
399 TestStartStop/group/no-preload/serial/FirstStart 45.92
400 TestStartStop/group/old-k8s-version/serial/DeployApp 8.25
402 TestStartStop/group/old-k8s-version/serial/Stop 16.18
403 TestStartStop/group/no-preload/serial/DeployApp 9.26
405 TestStartStop/group/no-preload/serial/Stop 18.11
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
407 TestStartStop/group/old-k8s-version/serial/SecondStart 45.68
408 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
409 TestStartStop/group/no-preload/serial/SecondStart 27.02
410 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
411 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
413 TestStartStop/group/embed-certs/serial/FirstStart 43.76
414 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
415 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
416 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
417 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
420 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.48
421 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
424 TestStartStop/group/newest-cni/serial/FirstStart 24.63
425 TestNetworkPlugins/group/auto/Start 43.14
426 TestStartStop/group/newest-cni/serial/DeployApp 0
428 TestStartStop/group/embed-certs/serial/DeployApp 7.26
429 TestStartStop/group/newest-cni/serial/Stop 2.44
430 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
431 TestStartStop/group/newest-cni/serial/SecondStart 10.9
433 TestStartStop/group/embed-certs/serial/Stop 17.42
434 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
435 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
436 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
438 TestNetworkPlugins/group/kindnet/Start 42.58
439 TestNetworkPlugins/group/auto/KubeletFlags 0.31
440 TestNetworkPlugins/group/auto/NetCatPod 10.21
441 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
442 TestStartStop/group/embed-certs/serial/SecondStart 50.62
443 TestNetworkPlugins/group/auto/DNS 0.16
444 TestNetworkPlugins/group/auto/Localhost 0.12
445 TestNetworkPlugins/group/auto/HairPin 0.13
446 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
448 TestStartStop/group/default-k8s-diff-port/serial/Stop 19.55
449 TestNetworkPlugins/group/calico/Start 44.65
450 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
451 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
452 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.21
453 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
454 TestNetworkPlugins/group/kindnet/NetCatPod 8.21
455 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
456 TestNetworkPlugins/group/kindnet/DNS 0.16
457 TestNetworkPlugins/group/kindnet/Localhost 0.13
458 TestNetworkPlugins/group/kindnet/HairPin 0.13
459 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
460 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
462 TestNetworkPlugins/group/custom-flannel/Start 46.37
463 TestNetworkPlugins/group/calico/ControllerPod 6.01
464 TestNetworkPlugins/group/enable-default-cni/Start 61.62
465 TestNetworkPlugins/group/calico/KubeletFlags 0.31
466 TestNetworkPlugins/group/calico/NetCatPod 10.2
467 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
468 TestNetworkPlugins/group/calico/DNS 0.12
469 TestNetworkPlugins/group/calico/Localhost 0.09
470 TestNetworkPlugins/group/calico/HairPin 0.09
471 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
472 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
474 TestNetworkPlugins/group/flannel/Start 51.03
475 TestNetworkPlugins/group/bridge/Start 64.44
476 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
477 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.28
478 TestNetworkPlugins/group/custom-flannel/DNS 0.11
479 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
480 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
481 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
482 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
483 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
484 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
485 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
486 TestNetworkPlugins/group/flannel/ControllerPod 6.01
487 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
488 TestNetworkPlugins/group/flannel/NetCatPod 8.17
489 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
490 TestNetworkPlugins/group/bridge/NetCatPod 7.23
491 TestNetworkPlugins/group/flannel/DNS 0.17
492 TestNetworkPlugins/group/flannel/Localhost 0.14
493 TestNetworkPlugins/group/flannel/HairPin 0.13
494 TestNetworkPlugins/group/bridge/DNS 0.12
495 TestNetworkPlugins/group/bridge/Localhost 0.11
496 TestNetworkPlugins/group/bridge/HairPin 0.09
TestDownloadOnly/v1.28.0/json-events (5.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-319272 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-319272 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.007461517s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.01s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 08:28:02.928288    9158 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1206 08:28:02.928406    9158 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-319272
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-319272: exit status 85 (72.99899ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-319272 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-319272 │ jenkins │ v1.37.0 │ 06 Dec 25 08:27 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:27:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:27:57.974198    9170 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:27:57.974377    9170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:27:57.974385    9170 out.go:374] Setting ErrFile to fd 2...
	I1206 08:27:57.974389    9170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:27:57.974558    9170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	W1206 08:27:57.974675    9170 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22049-5617/.minikube/config/config.json: open /home/jenkins/minikube-integration/22049-5617/.minikube/config/config.json: no such file or directory
	I1206 08:27:57.975143    9170 out.go:368] Setting JSON to true
	I1206 08:27:57.975967    9170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":629,"bootTime":1765009049,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:27:57.976053    9170 start.go:143] virtualization: kvm guest
	I1206 08:27:57.979896    9170 out.go:99] [download-only-319272] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1206 08:27:57.980021    9170 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 08:27:57.980062    9170 notify.go:221] Checking for updates...
	I1206 08:27:57.981295    9170 out.go:171] MINIKUBE_LOCATION=22049
	I1206 08:27:57.982580    9170 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:27:57.983801    9170 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:27:57.988497    9170 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 08:27:57.989600    9170 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 08:27:57.991592    9170 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 08:27:57.991843    9170 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:27:58.016320    9170 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 08:27:58.016405    9170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:27:58.233745    9170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-06 08:27:58.22445993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:27:58.233858    9170 docker.go:319] overlay module found
	I1206 08:27:58.235236    9170 out.go:99] Using the docker driver based on user configuration
	I1206 08:27:58.235263    9170 start.go:309] selected driver: docker
	I1206 08:27:58.235269    9170 start.go:927] validating driver "docker" against <nil>
	I1206 08:27:58.235345    9170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:27:58.294149    9170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-06 08:27:58.284081958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:27:58.294291    9170 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:27:58.294752    9170 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1206 08:27:58.294910    9170 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 08:27:58.296541    9170 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-319272 host does not exist
	  To start a cluster, run: "minikube start -p download-only-319272"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-319272
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (4.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-291174 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-291174 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.409614001s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.41s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 08:28:07.781352    9158 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 08:28:07.781393    9158 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-291174
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-291174: exit status 85 (75.015358ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-319272 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-319272 │ jenkins │ v1.37.0 │ 06 Dec 25 08:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-319272                                                                                                                                                   │ download-only-319272 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-291174 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-291174 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:28:03.424634    9529 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:03.424761    9529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:03.424769    9529 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:03.424774    9529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:03.424969    9529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:28:03.425435    9529 out.go:368] Setting JSON to true
	I1206 08:28:03.426224    9529 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":634,"bootTime":1765009049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:28:03.426282    9529 start.go:143] virtualization: kvm guest
	I1206 08:28:03.428124    9529 out.go:99] [download-only-291174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:28:03.428275    9529 notify.go:221] Checking for updates...
	I1206 08:28:03.429530    9529 out.go:171] MINIKUBE_LOCATION=22049
	I1206 08:28:03.430905    9529 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:03.432242    9529 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:28:03.433447    9529 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 08:28:03.434688    9529 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 08:28:03.437218    9529 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 08:28:03.437501    9529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:03.462757    9529 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 08:28:03.462835    9529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:03.519209    9529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-06 08:28:03.509520103 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:28:03.519301    9529 docker.go:319] overlay module found
	I1206 08:28:03.521034    9529 out.go:99] Using the docker driver based on user configuration
	I1206 08:28:03.521061    9529 start.go:309] selected driver: docker
	I1206 08:28:03.521069    9529 start.go:927] validating driver "docker" against <nil>
	I1206 08:28:03.521166    9529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:03.574897    9529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-06 08:28:03.565976375 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:28:03.575072    9529 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:03.575563    9529 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1206 08:28:03.575689    9529 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 08:28:03.577272    9529 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-291174 host does not exist
	  To start a cluster, run: "minikube start -p download-only-291174"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-291174
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (3.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-815139 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-815139 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.097457194s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 08:28:11.323690    9158 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1206 08:28:11.323725    9158 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-815139
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-815139: exit status 85 (74.854955ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-319272 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-319272 │ jenkins │ v1.37.0 │ 06 Dec 25 08:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-319272                                                                                                                                                          │ download-only-319272 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-291174 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-291174 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-291174                                                                                                                                                          │ download-only-291174 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-815139 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-815139 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:28:08.277169    9881 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:08.277393    9881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:08.277403    9881 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:08.277407    9881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:08.277574    9881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:28:08.278024    9881 out.go:368] Setting JSON to true
	I1206 08:28:08.278856    9881 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":639,"bootTime":1765009049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:28:08.278909    9881 start.go:143] virtualization: kvm guest
	I1206 08:28:08.280831    9881 out.go:99] [download-only-815139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:28:08.280960    9881 notify.go:221] Checking for updates...
	I1206 08:28:08.282391    9881 out.go:171] MINIKUBE_LOCATION=22049
	I1206 08:28:08.283770    9881 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:08.285115    9881 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:28:08.286343    9881 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 08:28:08.287703    9881 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 08:28:08.290090    9881 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 08:28:08.290336    9881 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:08.312377    9881 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 08:28:08.312464    9881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:08.374619    9881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 08:28:08.363861372 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:28:08.374712    9881 docker.go:319] overlay module found
	I1206 08:28:08.376187    9881 out.go:99] Using the docker driver based on user configuration
	I1206 08:28:08.376214    9881 start.go:309] selected driver: docker
	I1206 08:28:08.376220    9881 start.go:927] validating driver "docker" against <nil>
	I1206 08:28:08.376287    9881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:08.432925    9881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 08:28:08.424316145 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:28:08.433191    9881 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:08.433680    9881 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1206 08:28:08.433806    9881 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 08:28:08.435546    9881 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-815139 host does not exist
	  To start a cluster, run: "minikube start -p download-only-815139"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-815139
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-857088 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-857088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-857088
--- PASS: TestDownloadOnlyKic (0.40s)

                                                
                                    
x
+
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I1206 08:28:12.585434    9158 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-791651 --alsologtostderr --binary-mirror http://127.0.0.1:42485 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-791651" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-791651
--- PASS: TestBinaryMirror (0.82s)

                                                
                                    
x
+
TestOffline (89.47s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-829666 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-829666 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m23.115182535s)
helpers_test.go:175: Cleaning up "offline-crio-829666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-829666
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-829666: (6.354294698s)
--- PASS: TestOffline (89.47s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-765040
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-765040: exit status 85 (63.059025ms)

                                                
                                                
-- stdout --
	* Profile "addons-765040" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-765040"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-765040
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-765040: exit status 85 (63.930298ms)

                                                
                                                
-- stdout --
	* Profile "addons-765040" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-765040"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (122.51s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-765040 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-765040 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m2.505745331s)
--- PASS: TestAddons/Setup (122.51s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-765040 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-765040 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-765040 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-765040 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [956ac8b4-0ec3-4d79-a354-886ccc9b3353] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [956ac8b4-0ec3-4d79-a354-886ccc9b3353] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.002995479s
addons_test.go:694: (dbg) Run:  kubectl --context addons-765040 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-765040 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-765040 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (16.66s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-765040
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-765040: (16.378268419s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-765040
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-765040
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-765040
--- PASS: TestAddons/StoppedEnableDisable (16.66s)

                                                
                                    
x
+
TestCertOptions (23.56s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-011599 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-011599 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (20.062114644s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-011599 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-011599 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-011599 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-011599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-011599
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-011599: (2.739094996s)
--- PASS: TestCertOptions (23.56s)

                                                
                                    
x
+
TestCertExpiration (212.86s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-006207 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-006207 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.771698402s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-006207 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-006207 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (4.688615882s)
helpers_test.go:175: Cleaning up "cert-expiration-006207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-006207
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-006207: (2.393473861s)
--- PASS: TestCertExpiration (212.86s)

                                                
                                    
x
+
TestForceSystemdFlag (32.54s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-124894 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1206 09:03:01.705165    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-124894 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.509556416s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-124894 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-124894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-124894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-124894: (2.686757638s)
--- PASS: TestForceSystemdFlag (32.54s)

                                                
                                    
x
+
TestForceSystemdEnv (37.52s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-894703 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-894703 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.954334624s)
helpers_test.go:175: Cleaning up "force-systemd-env-894703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-894703
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-894703: (4.56140088s)
--- PASS: TestForceSystemdEnv (37.52s)

                                                
                                    
x
+
TestErrorSpam/setup (21.65s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-654509 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-654509 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-654509 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-654509 --driver=docker  --container-runtime=crio: (21.652183827s)
--- PASS: TestErrorSpam/setup (21.65s)

                                                
                                    
x
+
TestErrorSpam/start (0.65s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

                                                
                                    
x
+
TestErrorSpam/status (0.94s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 status
--- PASS: TestErrorSpam/status (0.94s)
TestErrorSpam/pause (5.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 pause: exit status 80 (2.153757832s)

                                                
                                                
-- stdout --
	* Pausing node nospam-654509 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:33:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 pause: exit status 80 (1.665804918s)

                                                
                                                
-- stdout --
	* Pausing node nospam-654509 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:33:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 pause: exit status 80 (1.76728353s)

                                                
                                                
-- stdout --
	* Pausing node nospam-654509 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:33:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.59s)
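Note: all three exit status 80 pauses above fail at the same point: "sudo runc list -f json" inside the node cannot read /run/runc. A minimal sketch for checking that state by hand, assuming the nospam-654509 profile from this run is still up (profile name taken from the log above):

    # List the runc state directory the pause path depends on; in this run it was missing.
    out/minikube-linux-amd64 -p nospam-654509 ssh -- sudo ls /run/runc
    # Run the exact command that failed in the log.
    out/minikube-linux-amd64 -p nospam-654509 ssh -- sudo runc list -f json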

                                                
                                    
TestErrorSpam/unpause (6.44s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 unpause: exit status 80 (2.322253771s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-654509 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:33:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 unpause: exit status 80 (2.256347004s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-654509 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:33:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 unpause: exit status 80 (1.85682463s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-654509 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T08:33:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.44s)
TestErrorSpam/stop (2.62s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 stop: (2.411570491s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654509 --log_dir /tmp/nospam-654509 stop
--- PASS: TestErrorSpam/stop (2.62s)
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/test/nested/copy/9158/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (36.21s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012975 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-012975 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (36.212343073s)
--- PASS: TestFunctional/serial/StartWithProxy (36.21s)
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (6.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1206 08:34:42.199271    9158 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012975 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-012975 --alsologtostderr -v=8: (6.026289121s)
functional_test.go:678: soft start took 6.026993453s for "functional-012975" cluster.
I1206 08:34:48.225894    9158 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.03s)
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-012975 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)
TestFunctional/serial/CacheCmd/cache/add_remote (2.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.55s)
TestFunctional/serial/CacheCmd/cache/add_local (0.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-012975 /tmp/TestFunctionalserialCacheCmdcacheadd_local1675073455/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 cache add minikube-local-cache-test:functional-012975
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 cache delete minikube-local-cache-test:functional-012975
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-012975
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.89s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.892803ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
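For reference, the cache_reload sequence above condenses to four commands (same profile, image, and exit codes as observed in the log):

    out/minikube-linux-amd64 -p functional-012975 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-012975 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image no longer present
    out/minikube-linux-amd64 -p functional-012975 cache reload
    out/minikube-linux-amd64 -p functional-012975 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload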

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 kubectl -- --context functional-012975 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-012975 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
TestFunctional/serial/ExtraConfig (42.03s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012975 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 08:35:16.552067    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:16.565229    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:16.577966    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:16.599393    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:16.640804    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:16.722222    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:16.883766    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:17.205448    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:17.847199    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:19.129034    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:21.690539    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:35:26.812078    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-012975 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.027109238s)
functional_test.go:776: restart took 42.027252906s for "functional-012975" cluster.
I1206 08:35:36.158792    9158 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (42.03s)
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-012975 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
TestFunctional/serial/LogsCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 logs
E1206 08:35:37.053947    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-012975 logs: (1.187332349s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)
TestFunctional/serial/LogsFileCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 logs --file /tmp/TestFunctionalserialLogsFileCmd1471768912/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-012975 logs --file /tmp/TestFunctionalserialLogsFileCmd1471768912/001/logs.txt: (1.210782285s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)
TestFunctional/serial/InvalidService (4.59s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-012975 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-012975
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-012975: exit status 115 (335.990525ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32088 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-012975 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-012975 delete -f testdata/invalidsvc.yaml: (1.084254151s)
--- PASS: TestFunctional/serial/InvalidService (4.59s)
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 config get cpus: exit status 14 (86.121261ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 config get cpus: exit status 14 (66.840387ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
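The ConfigCmd round trip above is this sequence; "config get" on an unset key exits with status 14, which is what both non-zero exits in the log show:

    out/minikube-linux-amd64 -p functional-012975 config unset cpus
    out/minikube-linux-amd64 -p functional-012975 config get cpus    # exit 14: key not in config
    out/minikube-linux-amd64 -p functional-012975 config set cpus 2
    out/minikube-linux-amd64 -p functional-012975 config get cpus    # prints 2
    out/minikube-linux-amd64 -p functional-012975 config unset cpus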

                                                
                                    
TestFunctional/parallel/DashboardCmd (5.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-012975 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-012975 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 47367: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.54s)
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-012975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (181.562769ms)

                                                
                                                
-- stdout --
	* [functional-012975] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:36:01.367272   46827 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:36:01.367545   46827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:36:01.367556   46827 out.go:374] Setting ErrFile to fd 2...
	I1206 08:36:01.367561   46827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:36:01.367748   46827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:36:01.368183   46827 out.go:368] Setting JSON to false
	I1206 08:36:01.369199   46827 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1112,"bootTime":1765009049,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:36:01.369254   46827 start.go:143] virtualization: kvm guest
	I1206 08:36:01.372361   46827 out.go:179] * [functional-012975] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:36:01.374358   46827 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:36:01.374388   46827 notify.go:221] Checking for updates...
	I1206 08:36:01.376560   46827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:36:01.377796   46827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:36:01.378873   46827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 08:36:01.380021   46827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:36:01.381274   46827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:36:01.383140   46827 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:36:01.383718   46827 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:36:01.412915   46827 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 08:36:01.413115   46827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:36:01.482285   46827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 08:36:01.47019761 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:36:01.482502   46827 docker.go:319] overlay module found
	I1206 08:36:01.484915   46827 out.go:179] * Using the docker driver based on existing profile
	I1206 08:36:01.486208   46827 start.go:309] selected driver: docker
	I1206 08:36:01.486226   46827 start.go:927] validating driver "docker" against &{Name:functional-012975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-012975 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:36:01.486349   46827 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:36:01.488448   46827 out.go:203] 
	W1206 08:36:01.489744   46827 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 08:36:01.490904   46827 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012975 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
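The dry-run exit status 23 above comes from the RSRC_INSUFFICIENT_REQ_MEMORY validation: the requested 250MB is below minikube's 1800MB minimum. A sketch of the same check; the 2048MB value in the second command is illustrative, not taken from the log:

    out/minikube-linux-amd64 start -p functional-012975 --dry-run --memory 250MB --driver=docker --container-runtime=crio    # exit 23: below the 1800MB minimum
    out/minikube-linux-amd64 start -p functional-012975 --dry-run --memory 2048MB --driver=docker --container-runtime=crio   # should clear the memory validation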

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-012975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (181.221559ms)

                                                
                                                
-- stdout --
	* [functional-012975] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:36:01.793812   47070 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:36:01.793921   47070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:36:01.793927   47070 out.go:374] Setting ErrFile to fd 2...
	I1206 08:36:01.793935   47070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:36:01.794267   47070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:36:01.794785   47070 out.go:368] Setting JSON to false
	I1206 08:36:01.795786   47070 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1113,"bootTime":1765009049,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:36:01.795852   47070 start.go:143] virtualization: kvm guest
	I1206 08:36:01.797665   47070 out.go:179] * [functional-012975] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 08:36:01.799337   47070 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:36:01.799343   47070 notify.go:221] Checking for updates...
	I1206 08:36:01.801166   47070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:36:01.803115   47070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:36:01.804473   47070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 08:36:01.805749   47070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:36:01.807591   47070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:36:01.810151   47070 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:36:01.810758   47070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:36:01.836255   47070 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 08:36:01.836357   47070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:36:01.900296   47070 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 08:36:01.889306545 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:36:01.900420   47070 docker.go:319] overlay module found
	I1206 08:36:01.902708   47070 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 08:36:01.903977   47070 start.go:309] selected driver: docker
	I1206 08:36:01.904007   47070 start.go:927] validating driver "docker" against &{Name:functional-012975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-012975 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:36:01.904129   47070 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:36:01.906168   47070 out.go:203] 
	W1206 08:36:01.907747   47070 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 08:36:01.909124   47070 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
TestFunctional/parallel/ServiceCmdConnect (7.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-012975 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-012975 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-5hqgl" [c00208c5-997a-422f-a546-9d66847a6d31] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-5hqgl" [c00208c5-997a-422f-a546-9d66847a6d31] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004923478s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32704
functional_test.go:1680: http://192.168.49.2:32704: success! body:
Request served by hello-node-connect-7d85dfc575-5hqgl

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32704
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.67s)
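For reference, the NodePort connectivity exercised above can be replayed by hand against the same profile; the sketch below simply strings together the commands the test logged (the kubectl wait step is an added synchronization step, not part of the test):

# Recreate the echo-server deployment, expose it, and hit the NodePort URL
kubectl --context functional-012975 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-012975 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-012975 wait --for=condition=Ready pod -l app=hello-node-connect --timeout=120s
URL=$(out/minikube-linux-amd64 -p functional-012975 service hello-node-connect --url)
curl -s "$URL"    # expect a "Request served by hello-node-connect-..." body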

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [a08c1478-cb61-461a-9dcc-4becca4adb9d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003761022s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-012975 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-012975 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-012975 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-012975 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9edaa63f-a9e8-43e0-8f4c-83e563aa5f6f] Pending
helpers_test.go:352: "sp-pod" [9edaa63f-a9e8-43e0-8f4c-83e563aa5f6f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9edaa63f-a9e8-43e0-8f4c-83e563aa5f6f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004701768s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-012975 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-012975 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-012975 apply -f testdata/storage-provisioner/pod.yaml
I1206 08:36:00.387214    9158 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5fb37206-9de1-4cdd-b2c4-ee19a3686206] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5fb37206-9de1-4cdd-b2c4-ee19a3686206] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003702742s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-012975 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.27s)
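The persistence check above (write a file through the claim, delete the pod, recreate it, and confirm the file survives) can be replayed with plain kubectl against the same testdata manifests; the wait steps below are added for synchronization and are not part of the test:

# Apply the PVC and the consuming pod, write a file, recreate the pod, verify the file persisted
kubectl --context functional-012975 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-012975 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-012975 wait --for=condition=Ready pod/sp-pod --timeout=120s
kubectl --context functional-012975 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-012975 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-012975 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-012975 wait --for=condition=Ready pod/sp-pod --timeout=120s
kubectl --context functional-012975 exec sp-pod -- ls /tmp/mount    # foo should still be listed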

                                                
                                    
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh -n functional-012975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 cp functional-012975:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1358816605/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh -n functional-012975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh -n functional-012975 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

                                                
                                    
TestFunctional/parallel/MySQL (16.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-012975 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-4gt7m" [d03e2e4e-845d-4b1b-b43a-80bfce3c72f0] Pending
helpers_test.go:352: "mysql-5bb876957f-4gt7m" [d03e2e4e-845d-4b1b-b43a-80bfce3c72f0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-4gt7m" [d03e2e4e-845d-4b1b-b43a-80bfce3c72f0] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 11.003865221s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-012975 exec mysql-5bb876957f-4gt7m -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-012975 exec mysql-5bb876957f-4gt7m -- mysql -ppassword -e "show databases;": exit status 1 (126.275583ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 08:36:05.759946    9158 retry.go:31] will retry after 1.130312566s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-012975 exec mysql-5bb876957f-4gt7m -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-012975 exec mysql-5bb876957f-4gt7m -- mysql -ppassword -e "show databases;": exit status 1 (157.80776ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 08:36:07.049230    9158 retry.go:31] will retry after 2.094765108s: exit status 1
2025/12/06 08:36:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1812: (dbg) Run:  kubectl --context functional-012975 exec mysql-5bb876957f-4gt7m -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-012975 exec mysql-5bb876957f-4gt7m -- mysql -ppassword -e "show databases;": exit status 1 (86.173611ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 08:36:09.231328    9158 retry.go:31] will retry after 1.82154182s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-012975 exec mysql-5bb876957f-4gt7m -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (16.69s)

                                                
                                    
TestFunctional/parallel/FileSync (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9158/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo cat /etc/test/nested/copy/9158/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

                                                
                                    
TestFunctional/parallel/CertSync (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9158.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo cat /etc/ssl/certs/9158.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9158.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo cat /usr/share/ca-certificates/9158.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/91582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo cat /etc/ssl/certs/91582.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/91582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo cat /usr/share/ca-certificates/91582.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.95s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-012975 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 ssh "sudo systemctl is-active docker": exit status 1 (271.205695ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 ssh "sudo systemctl is-active containerd": exit status 1 (267.388825ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
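Since this profile runs crio, the complementary check (crio active while the other runtimes stay inactive) can be done in one pass; a minimal sketch, assuming the crio unit name used in the minikube node image:

# crio should report active; docker and containerd should report inactive (a non-zero exit is expected here)
out/minikube-linux-amd64 -p functional-012975 ssh "sudo systemctl is-active crio docker containerd"
#   active
#   inactive
#   inactive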

                                                
                                    
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-012975 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-012975 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-zz2dl" [7a06efcf-e929-4aa9-be53-95007da0d7a2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-zz2dl" [7a06efcf-e929-4aa9-be53-95007da0d7a2] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004422456s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.17s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012975 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-012975
localhost/kicbase/echo-server:functional-012975
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012975 image ls --format short --alsologtostderr:
I1206 08:36:08.066755   49290 out.go:360] Setting OutFile to fd 1 ...
I1206 08:36:08.067107   49290 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.067120   49290 out.go:374] Setting ErrFile to fd 2...
I1206 08:36:08.067127   49290 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.067464   49290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:36:08.068253   49290 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.068404   49290 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.069027   49290 cli_runner.go:164] Run: docker container inspect functional-012975 --format={{.State.Status}}
I1206 08:36:08.090808   49290 ssh_runner.go:195] Run: systemctl --version
I1206 08:36:08.090872   49290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012975
I1206 08:36:08.112317   49290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-012975/id_rsa Username:docker}
I1206 08:36:08.208401   49290 ssh_runner.go:195] Run: sudo crictl images --output json
W1206 08:36:08.243544   49290 root.go:91] failed to log command end to audit: failed to find a log row with id equals to e572c52c-0f0c-44b7-afed-59bd6ce60713
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012975 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-012975  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-012975  │ 8367287732a5a │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012975 image ls --format table --alsologtostderr:
I1206 08:36:08.555648   49551 out.go:360] Setting OutFile to fd 1 ...
I1206 08:36:08.555884   49551 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.555894   49551 out.go:374] Setting ErrFile to fd 2...
I1206 08:36:08.555898   49551 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.556176   49551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:36:08.556742   49551 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.556831   49551 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.557241   49551 cli_runner.go:164] Run: docker container inspect functional-012975 --format={{.State.Status}}
I1206 08:36:08.577920   49551 ssh_runner.go:195] Run: systemctl --version
I1206 08:36:08.577970   49551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012975
I1206 08:36:08.596844   49551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-012975/id_rsa Username:docker}
I1206 08:36:08.689680   49551 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012975 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-012975"],"size":"4944818"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1
d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4
e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/k
ube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898b
bb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"8367287732a5a79c841783f40fbe437c2ebc772f18dfc2a69b3a4fa00f797843","repoDigests":["localhost/minikube-local-cache-test@sha256:8dad04c3fc1ea2d9d960bc6af32a3e9b213baf92fc6cf7144c9c4
5d33a49bdd6"],"repoTags":["localhost/minikube-local-cache-test:functional-012975"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c
52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805dd
caaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012975 image ls --format json --alsologtostderr:
I1206 08:36:08.554020   49550 out.go:360] Setting OutFile to fd 1 ...
I1206 08:36:08.554183   49550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.554196   49550 out.go:374] Setting ErrFile to fd 2...
I1206 08:36:08.554202   49550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.554448   49550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:36:08.555013   49550 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.555140   49550 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.555594   49550 cli_runner.go:164] Run: docker container inspect functional-012975 --format={{.State.Status}}
I1206 08:36:08.577188   49550 ssh_runner.go:195] Run: systemctl --version
I1206 08:36:08.577233   49550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012975
I1206 08:36:08.597154   49550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-012975/id_rsa Username:docker}
I1206 08:36:08.690317   49550 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012975 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-012975
size: "4944818"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 8367287732a5a79c841783f40fbe437c2ebc772f18dfc2a69b3a4fa00f797843
repoDigests:
- localhost/minikube-local-cache-test@sha256:8dad04c3fc1ea2d9d960bc6af32a3e9b213baf92fc6cf7144c9c45d33a49bdd6
repoTags:
- localhost/minikube-local-cache-test:functional-012975
size: "3330"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012975 image ls --format yaml --alsologtostderr:
I1206 08:36:08.305121   49449 out.go:360] Setting OutFile to fd 1 ...
I1206 08:36:08.305388   49449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.305398   49449 out.go:374] Setting ErrFile to fd 2...
I1206 08:36:08.305404   49449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.305634   49449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:36:08.306280   49449 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.306391   49449 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.306813   49449 cli_runner.go:164] Run: docker container inspect functional-012975 --format={{.State.Status}}
I1206 08:36:08.324309   49449 ssh_runner.go:195] Run: systemctl --version
I1206 08:36:08.324350   49449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012975
I1206 08:36:08.342674   49449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-012975/id_rsa Username:docker}
I1206 08:36:08.440083   49449 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 ssh pgrep buildkitd: exit status 1 (300.36094ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image build -t localhost/my-image:functional-012975 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-012975 image build -t localhost/my-image:functional-012975 testdata/build --alsologtostderr: (2.623227001s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012975 image build -t localhost/my-image:functional-012975 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 25e9475cd3f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-012975
--> 5e3b5e97e9a
Successfully tagged localhost/my-image:functional-012975
5e3b5e97e9ad74ad547f58b85953e7106005b48fda711ad8dcaa6cc8e4462443
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012975 image build -t localhost/my-image:functional-012975 testdata/build --alsologtostderr:
I1206 08:36:08.782553   49748 out.go:360] Setting OutFile to fd 1 ...
I1206 08:36:08.782690   49748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.782701   49748 out.go:374] Setting ErrFile to fd 2...
I1206 08:36:08.782707   49748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:36:08.782918   49748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:36:08.783500   49748 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.785315   49748 config.go:182] Loaded profile config "functional-012975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:36:08.785938   49748 cli_runner.go:164] Run: docker container inspect functional-012975 --format={{.State.Status}}
I1206 08:36:08.811489   49748 ssh_runner.go:195] Run: systemctl --version
I1206 08:36:08.811547   49748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012975
I1206 08:36:08.836240   49748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-012975/id_rsa Username:docker}
I1206 08:36:08.939780   49748 build_images.go:162] Building image from path: /tmp/build.1506348585.tar
I1206 08:36:08.939843   49748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 08:36:08.948476   49748 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1506348585.tar
I1206 08:36:08.952320   49748 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1506348585.tar: stat -c "%s %y" /var/lib/minikube/build/build.1506348585.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1506348585.tar': No such file or directory
I1206 08:36:08.952351   49748 ssh_runner.go:362] scp /tmp/build.1506348585.tar --> /var/lib/minikube/build/build.1506348585.tar (3072 bytes)
I1206 08:36:08.970640   49748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1506348585
I1206 08:36:08.979022   49748 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1506348585 -xf /var/lib/minikube/build/build.1506348585.tar
I1206 08:36:08.987553   49748 crio.go:315] Building image: /var/lib/minikube/build/build.1506348585
I1206 08:36:08.987649   49748 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-012975 /var/lib/minikube/build/build.1506348585 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 08:36:11.314904   49748 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-012975 /var/lib/minikube/build/build.1506348585 --cgroup-manager=cgroupfs: (2.327230066s)
I1206 08:36:11.314958   49748 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1506348585
I1206 08:36:11.323151   49748 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1506348585.tar
I1206 08:36:11.330917   49748 build_images.go:218] Built localhost/my-image:functional-012975 from /tmp/build.1506348585.tar
I1206 08:36:11.330951   49748 build_images.go:134] succeeded building to: functional-012975
I1206 08:36:11.330956   49748 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-012975
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image load --daemon kicbase/echo-server:functional-012975 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image load --daemon kicbase/echo-server:functional-012975 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-012975
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image load --daemon kicbase/echo-server:functional-012975 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image save kicbase/echo-server:functional-012975 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image rm kicbase/echo-server:functional-012975 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls
I1206 08:35:49.614026    9158 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 service list -o json
functional_test.go:1504: Took "322.299669ms" to run "out/minikube-linux-amd64 -p functional-012975 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30851
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-012975
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 image save --daemon kicbase/echo-server:functional-012975 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-012975
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30851
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-012975 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-012975 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-012975 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 43918: os: process already finished
helpers_test.go:519: unable to terminate pid 43545: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-012975 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-012975 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-012975 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b6ac3e18-9785-4b27-aaa3-f6619147babd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [b6ac3e18-9785-4b27-aaa3-f6619147babd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003964511s
I1206 08:36:01.140413    9158 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "438.905772ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.181933ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdany-port573184407/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765010153688779883" to /tmp/TestFunctionalparallelMountCmdany-port573184407/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765010153688779883" to /tmp/TestFunctionalparallelMountCmdany-port573184407/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765010153688779883" to /tmp/TestFunctionalparallelMountCmdany-port573184407/001/test-1765010153688779883
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (338.581926ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:35:54.027899    9158 retry.go:31] will retry after 334.340726ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 08:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 08:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 08:35 test-1765010153688779883
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh cat /mount-9p/test-1765010153688779883
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-012975 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [20ca2cca-56bc-424d-93dd-e6cbc5c738b9] Pending
helpers_test.go:352: "busybox-mount" [20ca2cca-56bc-424d-93dd-e6cbc5c738b9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1206 08:35:57.535871    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [20ca2cca-56bc-424d-93dd-e6cbc5c738b9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [20ca2cca-56bc-424d-93dd-e6cbc5c738b9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.002907676s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-012975 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdany-port573184407/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.83s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "386.70245ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.980519ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-012975 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.239.86 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-012975 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdspecific-port744784754/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (325.073958ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:36:04.844434    9158 retry.go:31] will retry after 358.395938ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdspecific-port744784754/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 ssh "sudo umount -f /mount-9p": exit status 1 (296.565782ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-012975 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdspecific-port744784754/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2986112912/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2986112912/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2986112912/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T" /mount1: exit status 1 (436.703772ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:36:06.829091    9158 retry.go:31] will retry after 683.785202ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012975 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-012975 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2986112912/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2986112912/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2986112912/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-012975
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-012975
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-012975
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22049-5617/.minikube/files/etc/test/nested/copy/9158/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (39.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479582 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1206 08:36:38.498476    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-479582 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (39.081384123s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (39.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 08:36:54.430558    9158 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479582 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-479582 --alsologtostderr -v=8: (6.20202254s)
functional_test.go:678: soft start took 6.202501219s for "functional-479582" cluster.
I1206 08:37:00.633099    9158 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-479582 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1110033040/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 cache add minikube-local-cache-test:functional-479582
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 cache delete minikube-local-cache-test:functional-479582
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-479582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.85s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.766582ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 kubectl -- --context functional-479582 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-479582 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (47.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479582 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-479582 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.843110084s)
functional_test.go:776: restart took 47.843217021s for "functional-479582" cluster.
I1206 08:37:54.314831    9158 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (47.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-479582 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-479582 logs: (1.200024765s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs912697278/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-479582 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs912697278/001/logs.txt: (1.220778015s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-479582 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-479582
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-479582: exit status 115 (346.276118ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32198 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-479582 delete -f testdata/invalidsvc.yaml
E1206 08:38:00.420626    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 config get cpus: exit status 14 (80.221909ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 config get cpus: exit status 14 (84.183914ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-479582 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-479582 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 65591: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479582 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-479582 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (179.30447ms)

                                                
                                                
-- stdout --
	* [functional-479582] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:38:22.730314   65135 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:38:22.730583   65135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:22.730594   65135 out.go:374] Setting ErrFile to fd 2...
	I1206 08:38:22.730599   65135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:22.730823   65135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:38:22.731253   65135 out.go:368] Setting JSON to false
	I1206 08:38:22.732387   65135 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1254,"bootTime":1765009049,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:38:22.732465   65135 start.go:143] virtualization: kvm guest
	I1206 08:38:22.734426   65135 out.go:179] * [functional-479582] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:38:22.736582   65135 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:38:22.736597   65135 notify.go:221] Checking for updates...
	I1206 08:38:22.741649   65135 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:38:22.743294   65135 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:38:22.744949   65135 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 08:38:22.746587   65135 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:38:22.748285   65135 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:38:22.750274   65135 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 08:38:22.750855   65135 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:38:22.776612   65135 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 08:38:22.776708   65135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:38:22.839270   65135 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 08:38:22.829868722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:38:22.839407   65135 docker.go:319] overlay module found
	I1206 08:38:22.841603   65135 out.go:179] * Using the docker driver based on existing profile
	I1206 08:38:22.843003   65135 start.go:309] selected driver: docker
	I1206 08:38:22.843018   65135 start.go:927] validating driver "docker" against &{Name:functional-479582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-479582 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:38:22.843103   65135 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:38:22.845062   65135 out.go:203] 
	W1206 08:38:22.846447   65135 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 08:38:22.847761   65135 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479582 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479582 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-479582 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (159.807629ms)

                                                
                                                
-- stdout --
	* [functional-479582] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:38:18.725972   63889 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:38:18.726235   63889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:18.726243   63889 out.go:374] Setting ErrFile to fd 2...
	I1206 08:38:18.726248   63889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:18.726540   63889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:38:18.726955   63889 out.go:368] Setting JSON to false
	I1206 08:38:18.727917   63889 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1250,"bootTime":1765009049,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:38:18.727973   63889 start.go:143] virtualization: kvm guest
	I1206 08:38:18.730124   63889 out.go:179] * [functional-479582] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 08:38:18.731423   63889 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:38:18.731422   63889 notify.go:221] Checking for updates...
	I1206 08:38:18.732655   63889 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:38:18.733979   63889 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 08:38:18.735267   63889 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 08:38:18.736560   63889 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:38:18.737636   63889 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:38:18.739303   63889 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 08:38:18.739857   63889 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:38:18.763744   63889 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 08:38:18.763827   63889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:38:18.816983   63889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 08:38:18.807404636 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:38:18.817146   63889 docker.go:319] overlay module found
	I1206 08:38:18.818882   63889 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 08:38:18.819902   63889 start.go:309] selected driver: docker
	I1206 08:38:18.819917   63889 start.go:927] validating driver "docker" against &{Name:functional-479582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-479582 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:38:18.820031   63889 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:38:18.821608   63889 out.go:203] 
	W1206 08:38:18.822659   63889 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 08:38:18.823806   63889 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.16s)
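The French messages above are the localized form of the same RSRC_INSUFFICIENT_REQ_MEMORY failure exercised by the DryRun test. A minimal sketch of reproducing the localized output, assuming minikube selects its message language from the locale environment (the LC_ALL variable and value are an assumption, not taken from the test source):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assumption: the message language follows the locale environment, so a
	// French locale should reproduce the localized output above.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-479582", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	// Exit status 23 is expected here: the 250MB request is below the minimum.
	out, _ := cmd.CombinedOutput()
	fmt.Print(string(out))
}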

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.97s)
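The status test covers three output modes: the default text, a Go-template format string, and JSON. A minimal sketch that decodes the JSON form, assuming the object carries the same Host, Kubelet, APIServer and Kubeconfig fields referenced by the template above; the exact JSON shape is otherwise an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// minikubeStatus mirrors the fields named in the -f template above; the
// exact JSON shape emitted by "status -o json" is an assumption.
type minikubeStatus struct {
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
}

func main() {
	// Same binary and profile as in this run; non-zero exits encode degraded
	// cluster states, so the output can still be worth parsing on error.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-479582",
		"status", "-o", "json").Output()
	if err != nil {
		fmt.Println("status exited non-zero:", err)
	}
	var st minikubeStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		fmt.Println("could not parse status JSON:", jsonErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}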

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-479582 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-479582 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-hmx6p" [f7a94dd0-5583-48e4-b640-7f44c3f29975] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-hmx6p" [f7a94dd0-5583-48e4-b640-7f44c3f29975] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003773601s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32347
functional_test.go:1680: http://192.168.49.2:32347: success! body:
Request served by hello-node-connect-9f67c86d4-hmx6p

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32347
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.70s)
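The connect test resolves the NodePort URL with "service hello-node-connect --url" and then issues a plain HTTP GET; the body above is the echo-server reflecting the request back. A minimal sketch of the same probe, using the URL printed in this run (in practice it changes per run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// URL taken from the output above; normally it comes from
	// "minikube service hello-node-connect --url" and differs per run.
	url := "http://192.168.49.2:32347"

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	// The echo-server answers with a description of the request it served.
	fmt.Println(string(body))
}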

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c89a0845-26d2-46d2-b205-38082aa87055] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003656828s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-479582 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-479582 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-479582 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-479582 apply -f testdata/storage-provisioner/pod.yaml
I1206 08:38:10.090657    9158 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c7ca3957-0163-4b79-b35e-e734c4950010] Pending
helpers_test.go:352: "sp-pod" [c7ca3957-0163-4b79-b35e-e734c4950010] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c7ca3957-0163-4b79-b35e-e734c4950010] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003734983s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-479582 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-479582 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-479582 apply -f testdata/storage-provisioner/pod.yaml
I1206 08:38:21.014717    9158 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [86a548ac-17ed-4e46-898a-282358f89133] Pending
helpers_test.go:352: "sp-pod" [86a548ac-17ed-4e46-898a-282358f89133] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [86a548ac-17ed-4e46-898a-282358f89133] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004542023s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-479582 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.44s)
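The PVC test's round trip is: apply the claim and a pod, write a file into the mounted volume, delete and recreate the pod, then check the file is still there. A minimal sketch of the same flow driven through kubectl, reusing the manifests, pod name and mount path shown above; the run helper and the wait timeouts are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the functional-479582 context; it is
// only a convenience for this sketch.
func run(args ...string) (string, error) {
	full := append([]string{"--context", "functional-479582"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m"},
		// If the volume really persisted, foo is still visible here.
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		out, err := run(s...)
		fmt.Printf("kubectl %v\n%s", s, out)
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}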

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.65s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh -n functional-479582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 cp functional-479582:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp4218030466/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh -n functional-479582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh -n functional-479582 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (15.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-479582 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-ct4gh" [ee82e389-d5cc-4f8d-8f00-eaacbdf933ab] Pending
helpers_test.go:352: "mysql-844cf969f6-ct4gh" [ee82e389-d5cc-4f8d-8f00-eaacbdf933ab] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-ct4gh" [ee82e389-d5cc-4f8d-8f00-eaacbdf933ab] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 14.002995541s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-479582 exec mysql-844cf969f6-ct4gh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-479582 exec mysql-844cf969f6-ct4gh -- mysql -ppassword -e "show databases;": exit status 1 (91.58646ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 08:38:15.799277    9158 retry.go:31] will retry after 1.177462553s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-479582 exec mysql-844cf969f6-ct4gh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (15.57s)
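The first "show databases;" attempt fails with ERROR 2002 because mysqld inside the pod is still starting even though the pod already reports Running; the harness retries about a second later and succeeds. A minimal sketch of that retry pattern, with the attempt count and interval chosen arbitrarily:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"--context", "functional-479582",
		"exec", "mysql-844cf969f6-ct4gh", "--",
		"mysql", "-ppassword", "-e", "show databases;",
	}
	// A Running pod does not guarantee mysqld is accepting connections yet,
	// so retry the query a few times before giving up.
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}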

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9158/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo cat /etc/test/nested/copy/9158/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9158.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo cat /etc/ssl/certs/9158.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9158.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo cat /usr/share/ca-certificates/9158.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/91582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo cat /etc/ssl/certs/91582.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/91582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo cat /usr/share/ca-certificates/91582.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-479582 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 ssh "sudo systemctl is-active docker": exit status 1 (330.313057ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 ssh "sudo systemctl is-active containerd": exit status 1 (337.201812ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.67s)
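The non-zero exits above are the expected result: systemctl is-active exits 0 only for an active unit and exits 3 for an inactive one, which the ssh wrapper surfaces as "Process exited with status 3". On this crio cluster, docker and containerd being inactive is exactly what the test verifies. A minimal local sketch of the same check, with the unit names matching the test:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// "active" exits 0; inactive or unknown units print their state and
		// exit non-zero (typically 3), so err alone is not a failure here.
		fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
	}
}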

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-479582 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-479582
localhost/kicbase/echo-server:functional-479582
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479582 image ls --format short --alsologtostderr:
I1206 08:38:29.314159   68058 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:29.314299   68058 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.314317   68058 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:29.314325   68058 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.314620   68058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:38:29.315469   68058 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.315614   68058 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.316247   68058 cli_runner.go:164] Run: docker container inspect functional-479582 --format={{.State.Status}}
I1206 08:38:29.337493   68058 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:29.337533   68058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479582
I1206 08:38:29.356015   68058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-479582/id_rsa Username:docker}
I1206 08:38:29.452581   68058 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-479582 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ localhost/minikube-local-cache-test     │ functional-479582  │ 8367287732a5a │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-479582  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479582 image ls --format table --alsologtostderr:
I1206 08:38:29.823228   68568 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:29.823381   68568 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.823393   68568 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:29.823399   68568 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.823713   68568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:38:29.824574   68568 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.824722   68568 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.825387   68568 cli_runner.go:164] Run: docker container inspect functional-479582 --format={{.State.Status}}
I1206 08:38:29.845740   68568 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:29.845789   68568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479582
I1206 08:38:29.866930   68568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-479582/id_rsa Username:docker}
I1206 08:38:29.961404   68568 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-479582 image ls --format json --alsologtostderr:
[{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"re
poTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aa
e68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-479582"],"size":"4943877"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
,"gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"8367287732a5a79c841783f40fbe437c2ebc772f18dfc2a69b3a4fa00f797843","repoDigests":["localhost/minikube-local-cache-test@sha256:8dad04c3fc1ea2d9d960bc6af32a3e9b213baf92fc6cf7144c9c45d33a49bdd6"],"repoTags":["localhost/minikube-local-cache-test:functional-479582"],"size":"3330"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9a
c2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379
124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"s
ize":"155491845"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha2
56:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479582 image ls --format json --alsologtostderr:
I1206 08:38:29.660631   68406 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:29.660883   68406 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.660893   68406 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:29.660897   68406 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.661123   68406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:38:29.661686   68406 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.661799   68406 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.662279   68406 cli_runner.go:164] Run: docker container inspect functional-479582 --format={{.State.Status}}
I1206 08:38:29.680970   68406 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:29.681036   68406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479582
I1206 08:38:29.698084   68406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-479582/id_rsa Username:docker}
I1206 08:38:29.796771   68406 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.30s)
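The stderr above shows the listing is gathered with "sudo crictl images --output json" inside the node, and the JSON printed by "image ls --format json" is an array of objects with id, repoDigests, repoTags and size fields. A minimal sketch decoding that shape, using the binary and profile from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the fields visible in the JSON listing above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-479582",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected JSON shape:", err)
		return
	}
	for _, img := range images {
		fmt.Printf("%v %s bytes\n", img.RepoTags, img.Size)
	}
}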

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-479582 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 8367287732a5a79c841783f40fbe437c2ebc772f18dfc2a69b3a4fa00f797843
repoDigests:
- localhost/minikube-local-cache-test@sha256:8dad04c3fc1ea2d9d960bc6af32a3e9b213baf92fc6cf7144c9c45d33a49bdd6
repoTags:
- localhost/minikube-local-cache-test:functional-479582
size: "3330"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-479582
size: "4943877"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479582 image ls --format yaml --alsologtostderr:
I1206 08:38:29.404494   68188 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:29.404601   68188 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.404612   68188 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:29.404619   68188 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.404840   68188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:38:29.405429   68188 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.405538   68188 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.405977   68188 cli_runner.go:164] Run: docker container inspect functional-479582 --format={{.State.Status}}
I1206 08:38:29.426317   68188 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:29.426367   68188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479582
I1206 08:38:29.445232   68188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-479582/id_rsa Username:docker}
I1206 08:38:29.544750   68188 ssh_runner.go:195] Run: sudo crictl images --output json
W1206 08:38:29.585468   68188 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 8a8aa99b-1a43-407b-ba4d-f00b8dab5d78
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)
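
The YAML above is minikube's rendering of the runtime's image list; below is a minimal Go sketch of gathering the same data by hand. It runs the command shown in the log ("sudo crictl images --output json") and decodes only the fields visible in the YAML; the JSON field names are an assumption inferred from those keys (id, repoTags, repoDigests, size), not taken from a documented schema.

    // listimages.go: hypothetical sketch, not minikube's own code.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type criImage struct {
        ID          string   `json:"id"`
        RepoTags    []string `json:"repoTags"`
        RepoDigests []string `json:"repoDigests"`
        Size        string   `json:"size"`
    }

    type criImageList struct {
        Images []criImage `json:"images"`
    }

    func main() {
        // In the test this runs inside the node over SSH; locally it assumes
        // crictl is installed and the runtime socket is reachable.
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var list criImageList
        if err := json.Unmarshal(out, &list); err != nil {
            log.Fatal(err)
        }
        for _, img := range list.Images {
            fmt.Printf("%.12s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
        }
    }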

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 ssh pgrep buildkitd: exit status 1 (296.553516ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image build -t localhost/my-image:functional-479582 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-479582 image build -t localhost/my-image:functional-479582 testdata/build --alsologtostderr: (2.475499919s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-479582 image build -t localhost/my-image:functional-479582 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cbd256e51a4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-479582
--> 4b96fbb805d
Successfully tagged localhost/my-image:functional-479582
4b96fbb805dce7ee93da0b5454a2cd3c56a9f691d884909119b4389a1324319c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479582 image build -t localhost/my-image:functional-479582 testdata/build --alsologtostderr:
I1206 08:38:29.862246   68581 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:29.885555   68581 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.885601   68581 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:29.885613   68581 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:29.885935   68581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
I1206 08:38:29.886805   68581 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.887610   68581 config.go:182] Loaded profile config "functional-479582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:29.888083   68581 cli_runner.go:164] Run: docker container inspect functional-479582 --format={{.State.Status}}
I1206 08:38:29.908262   68581 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:29.908335   68581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479582
I1206 08:38:29.927034   68581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/functional-479582/id_rsa Username:docker}
I1206 08:38:30.019471   68581 build_images.go:162] Building image from path: /tmp/build.1430528545.tar
I1206 08:38:30.019545   68581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 08:38:30.027329   68581 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1430528545.tar
I1206 08:38:30.031255   68581 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1430528545.tar: stat -c "%s %y" /var/lib/minikube/build/build.1430528545.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1430528545.tar': No such file or directory
I1206 08:38:30.031284   68581 ssh_runner.go:362] scp /tmp/build.1430528545.tar --> /var/lib/minikube/build/build.1430528545.tar (3072 bytes)
I1206 08:38:30.048794   68581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1430528545
I1206 08:38:30.056385   68581 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1430528545 -xf /var/lib/minikube/build/build.1430528545.tar
I1206 08:38:30.064110   68581 crio.go:315] Building image: /var/lib/minikube/build/build.1430528545
I1206 08:38:30.064179   68581 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-479582 /var/lib/minikube/build/build.1430528545 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 08:38:32.245774   68581 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-479582 /var/lib/minikube/build/build.1430528545 --cgroup-manager=cgroupfs: (2.181553072s)
I1206 08:38:32.245855   68581 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1430528545
I1206 08:38:32.254854   68581 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1430528545.tar
I1206 08:38:32.262673   68581 build_images.go:218] Built localhost/my-image:functional-479582 from /tmp/build.1430528545.tar
I1206 08:38:32.262703   68581 build_images.go:134] succeeded building to: functional-479582
I1206 08:38:32.262708   68581 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.99s)
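
The STEP lines above imply a three-step Containerfile in testdata/build, and the stderr shows how the build is driven on the crio runtime: the context tarball is copied under /var/lib/minikube/build, unpacked, and handed to podman. Below is a rough Go sketch of that command sequence, reusing the paths, tag, and flags from the log; the Containerfile contents in the comment are inferred from the STEP lines, not read from the repository.

    // buildimage.go: sketch of the logged build flow, not minikube's build_images.go.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Inferred build context (testdata/build):
        //   FROM gcr.io/k8s-minikube/busybox
        //   RUN true
        //   ADD content.txt /
        buildDir := "/var/lib/minikube/build/build.1430528545" // example path from the log

        cmds := [][]string{
            {"sudo", "mkdir", "-p", buildDir},
            {"sudo", "tar", "-C", buildDir, "-xf", buildDir + ".tar"},
            {"sudo", "podman", "build",
                "-t", "localhost/my-image:functional-479582",
                buildDir, "--cgroup-manager=cgroupfs"},
        }
        for _, c := range cmds {
            if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
                log.Fatalf("%v: %v\n%s", c, err, out)
            }
        }
    }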

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-479582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image load --daemon kicbase/echo-server:functional-479582 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-479582 image load --daemon kicbase/echo-server:functional-479582 --alsologtostderr: (1.137568811s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "429.609909ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "111.291683ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image load --daemon kicbase/echo-server:functional-479582 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "503.745261ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "89.159602ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-479582 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-479582 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-479582 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 61448: os: process already finished
helpers_test.go:519: unable to terminate pid 61098: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-479582 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-479582 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (12.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-479582 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [24043cc9-24ee-4bbf-8b38-1ff49f7a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [24043cc9-24ee-4bbf-8b38-1ff49f7a80a1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003631214s
I1206 08:38:16.134967    9158 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (12.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (4.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-479582
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image load --daemon kicbase/echo-server:functional-479582 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-479582 image load --daemon kicbase/echo-server:functional-479582 --alsologtostderr: (3.731231483s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (4.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (1.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image save kicbase/echo-server:functional-479582 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-479582 image save kicbase/echo-server:functional-479582 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.175649138s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (1.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image rm kicbase/echo-server:functional-479582 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.70s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-479582
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 image save --daemon kicbase/echo-server:functional-479582 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-479582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-479582 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.104.215 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
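
The two tunnel checks above read the LoadBalancer ingress IP of nginx-svc with a kubectl jsonpath query and then confirm the address answers over HTTP. Below is a short Go sketch of the same sequence; the kubectl context name comes from the log, and the plain HTTP GET is an assumption about how "is working" is verified.

    // checktunnel.go: sketch of the ingress-IP lookup plus a direct HTTP probe.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-479582",
            "get", "svc", "nginx-svc",
            "-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
        if err != nil {
            log.Fatal(err)
        }
        ip := strings.TrimSpace(string(out))
        resp, err := http.Get("http://" + ip)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Printf("tunnel at http://%s answered with %s\n", ip, resp.Status)
    }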

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-479582 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-479582 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-479582 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-d9jrc" [431afb52-75d4-40a3-a4a1-6ab6afc6276d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-d9jrc" [431afb52-75d4-40a3-a4a1-6ab6afc6276d] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003875578s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo228527721/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765010298830365960" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo228527721/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765010298830365960" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo228527721/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765010298830365960" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo228527721/001/test-1765010298830365960
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.589633ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:38:19.108263    9158 retry.go:31] will retry after 629.440696ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 08:38 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 08:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 08:38 test-1765010298830365960
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh cat /mount-9p/test-1765010298830365960
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-479582 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [a8c934bb-f0d5-4de4-96b0-3d70f8101d8e] Pending
helpers_test.go:352: "busybox-mount" [a8c934bb-f0d5-4de4-96b0-3d70f8101d8e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [a8c934bb-f0d5-4de4-96b0-3d70f8101d8e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [a8c934bb-f0d5-4de4-96b0-3d70f8101d8e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004025286s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-479582 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo228527721/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.99s)
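
The mount checks above poll the guest with "findmnt -T /mount-9p | grep 9p" and retry after a short delay until the 9p mount appears (the retry.go lines). Below is a small Go sketch of that wait loop, assuming the binary path and profile name from the log; the deadline and retry interval are illustrative.

    // waitmount.go: sketch of the poll-and-retry pattern used by the mount tests.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(30 * time.Second)
        for {
            cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-479582",
                "ssh", "findmnt -T /mount-9p | grep 9p")
            if err := cmd.Run(); err == nil {
                log.Println("/mount-9p is mounted")
                return
            }
            if time.Now().After(deadline) {
                log.Fatal("timed out waiting for /mount-9p")
            }
            time.Sleep(500 * time.Millisecond) // comparable to the ~600ms backoff in the log
        }
    }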

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-479582 service list: (1.735637659s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3308892377/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.119378ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:38:26.164847    9158 retry.go:31] will retry after 432.025199ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3308892377/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 ssh "sudo umount -f /mount-9p": exit status 1 (334.32416ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-479582 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3308892377/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.98s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-479582 service list -o json: (1.803686332s)
functional_test.go:1504: Took "1.803888484s" to run "out/minikube-linux-amd64 -p functional-479582 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.80s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo980758628/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo980758628/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo980758628/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T" /mount1: exit status 1 (392.347593ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:38:28.197118    9158 retry.go:31] will retry after 586.738427ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T" /mount2
2025/12/06 08:38:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-479582 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo980758628/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo980758628/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo980758628/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31093
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.63s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-479582 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31093
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.58s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-479582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-479582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-479582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (154.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1206 08:40:16.551603    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:43.214844    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:43.221276    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:43.232642    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:43.254096    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:43.295477    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:43.376926    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:43.538450    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:43.860016    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:44.262798    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:44.502218    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:45.783795    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:48.345844    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:40:53.467169    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:41:03.708804    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m33.700013806s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (154.41s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 kubectl -- rollout status deployment/busybox: (2.652916318s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-44t4k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-4mglf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-dnfsx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-44t4k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-4mglf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-dnfsx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-44t4k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-4mglf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-dnfsx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.59s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-44t4k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-44t4k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-4mglf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-4mglf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-dnfsx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 kubectl -- exec busybox-7b57f96db7-dnfsx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (53.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 node add --alsologtostderr -v 5
E1206 08:41:24.190137    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:05.152165    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 node add --alsologtostderr -v 5: (52.702276489s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.56s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-613409 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp testdata/cp-test.txt ha-613409:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1798596416/001/cp-test_ha-613409.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409:/home/docker/cp-test.txt ha-613409-m02:/home/docker/cp-test_ha-613409_ha-613409-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m02 "sudo cat /home/docker/cp-test_ha-613409_ha-613409-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409:/home/docker/cp-test.txt ha-613409-m03:/home/docker/cp-test_ha-613409_ha-613409-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m03 "sudo cat /home/docker/cp-test_ha-613409_ha-613409-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409:/home/docker/cp-test.txt ha-613409-m04:/home/docker/cp-test_ha-613409_ha-613409-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m04 "sudo cat /home/docker/cp-test_ha-613409_ha-613409-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp testdata/cp-test.txt ha-613409-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1798596416/001/cp-test_ha-613409-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m02:/home/docker/cp-test.txt ha-613409:/home/docker/cp-test_ha-613409-m02_ha-613409.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409 "sudo cat /home/docker/cp-test_ha-613409-m02_ha-613409.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m02:/home/docker/cp-test.txt ha-613409-m03:/home/docker/cp-test_ha-613409-m02_ha-613409-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m03 "sudo cat /home/docker/cp-test_ha-613409-m02_ha-613409-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m02:/home/docker/cp-test.txt ha-613409-m04:/home/docker/cp-test_ha-613409-m02_ha-613409-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m04 "sudo cat /home/docker/cp-test_ha-613409-m02_ha-613409-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp testdata/cp-test.txt ha-613409-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1798596416/001/cp-test_ha-613409-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m03:/home/docker/cp-test.txt ha-613409:/home/docker/cp-test_ha-613409-m03_ha-613409.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409 "sudo cat /home/docker/cp-test_ha-613409-m03_ha-613409.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m03:/home/docker/cp-test.txt ha-613409-m02:/home/docker/cp-test_ha-613409-m03_ha-613409-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m02 "sudo cat /home/docker/cp-test_ha-613409-m03_ha-613409-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m03:/home/docker/cp-test.txt ha-613409-m04:/home/docker/cp-test_ha-613409-m03_ha-613409-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m04 "sudo cat /home/docker/cp-test_ha-613409-m03_ha-613409-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp testdata/cp-test.txt ha-613409-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1798596416/001/cp-test_ha-613409-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m04:/home/docker/cp-test.txt ha-613409:/home/docker/cp-test_ha-613409-m04_ha-613409.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409 "sudo cat /home/docker/cp-test_ha-613409-m04_ha-613409.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m04:/home/docker/cp-test.txt ha-613409-m02:/home/docker/cp-test_ha-613409-m04_ha-613409-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m02 "sudo cat /home/docker/cp-test_ha-613409-m04_ha-613409-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 cp ha-613409-m04:/home/docker/cp-test.txt ha-613409-m03:/home/docker/cp-test_ha-613409-m04_ha-613409-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 ssh -n ha-613409-m03 "sudo cat /home/docker/cp-test_ha-613409-m04_ha-613409-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.91s)
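
Every cp step above is verified by cat-ing the file back over minikube ssh on the target node. The sketch below shows that round trip as one check, assuming the minikube binary is on PATH; the profile, node names, and paths are the ones from this run, and copyAndVerify is an illustrative helper, not the helpers_test.go implementation.

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

const profile = "ha-613409" // profile name from this run

// copyAndVerify copies a local file to a node with `minikube cp` and then
// reads it back with `minikube ssh -n <node> sudo cat`, comparing contents.
func copyAndVerify(local, node, remote string) error {
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	if out, err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		return err
	}
	if !bytes.Equal(want, got) {
		return fmt.Errorf("content mismatch on %s:%s", node, remote)
	}
	return nil
}

func main() {
	for _, node := range []string{"ha-613409", "ha-613409-m02", "ha-613409-m03", "ha-613409-m04"} {
		if err := copyAndVerify("testdata/cp-test.txt", node, "/home/docker/cp-test.txt"); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("cp round-trip verified on all nodes")
}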

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 node stop m02 --alsologtostderr -v 5: (13.142719362s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5: exit status 7 (693.572792ms)

                                                
                                                
-- stdout --
	ha-613409
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-613409-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-613409-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-613409-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:42:40.930618   88688 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:42:40.930727   88688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:42:40.930739   88688 out.go:374] Setting ErrFile to fd 2...
	I1206 08:42:40.930744   88688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:42:40.930966   88688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:42:40.931184   88688 out.go:368] Setting JSON to false
	I1206 08:42:40.931211   88688 mustload.go:66] Loading cluster: ha-613409
	I1206 08:42:40.931344   88688 notify.go:221] Checking for updates...
	I1206 08:42:40.931641   88688 config.go:182] Loaded profile config "ha-613409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:42:40.931656   88688 status.go:174] checking status of ha-613409 ...
	I1206 08:42:40.932156   88688 cli_runner.go:164] Run: docker container inspect ha-613409 --format={{.State.Status}}
	I1206 08:42:40.951313   88688 status.go:371] ha-613409 host status = "Running" (err=<nil>)
	I1206 08:42:40.951336   88688 host.go:66] Checking if "ha-613409" exists ...
	I1206 08:42:40.951597   88688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-613409
	I1206 08:42:40.969707   88688 host.go:66] Checking if "ha-613409" exists ...
	I1206 08:42:40.969941   88688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:42:40.970017   88688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-613409
	I1206 08:42:40.988727   88688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/ha-613409/id_rsa Username:docker}
	I1206 08:42:41.080416   88688 ssh_runner.go:195] Run: systemctl --version
	I1206 08:42:41.086825   88688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:42:41.099497   88688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:42:41.156677   88688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-06 08:42:41.147042374 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:42:41.157346   88688 kubeconfig.go:125] found "ha-613409" server: "https://192.168.49.254:8443"
	I1206 08:42:41.157379   88688 api_server.go:166] Checking apiserver status ...
	I1206 08:42:41.157420   88688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:42:41.169147   88688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup
	W1206 08:42:41.177833   88688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:42:41.177892   88688 ssh_runner.go:195] Run: ls
	I1206 08:42:41.181774   88688 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 08:42:41.185829   88688 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 08:42:41.185869   88688 status.go:463] ha-613409 apiserver status = Running (err=<nil>)
	I1206 08:42:41.185881   88688 status.go:176] ha-613409 status: &{Name:ha-613409 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:42:41.185901   88688 status.go:174] checking status of ha-613409-m02 ...
	I1206 08:42:41.186200   88688 cli_runner.go:164] Run: docker container inspect ha-613409-m02 --format={{.State.Status}}
	I1206 08:42:41.204389   88688 status.go:371] ha-613409-m02 host status = "Stopped" (err=<nil>)
	I1206 08:42:41.204409   88688 status.go:384] host is not running, skipping remaining checks
	I1206 08:42:41.204415   88688 status.go:176] ha-613409-m02 status: &{Name:ha-613409-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:42:41.204433   88688 status.go:174] checking status of ha-613409-m03 ...
	I1206 08:42:41.204655   88688 cli_runner.go:164] Run: docker container inspect ha-613409-m03 --format={{.State.Status}}
	I1206 08:42:41.223341   88688 status.go:371] ha-613409-m03 host status = "Running" (err=<nil>)
	I1206 08:42:41.223376   88688 host.go:66] Checking if "ha-613409-m03" exists ...
	I1206 08:42:41.223609   88688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-613409-m03
	I1206 08:42:41.242421   88688 host.go:66] Checking if "ha-613409-m03" exists ...
	I1206 08:42:41.242758   88688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:42:41.242806   88688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-613409-m03
	I1206 08:42:41.262004   88688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/ha-613409-m03/id_rsa Username:docker}
	I1206 08:42:41.354688   88688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:42:41.368181   88688 kubeconfig.go:125] found "ha-613409" server: "https://192.168.49.254:8443"
	I1206 08:42:41.368210   88688 api_server.go:166] Checking apiserver status ...
	I1206 08:42:41.368250   88688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:42:41.379903   88688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1181/cgroup
	W1206 08:42:41.388974   88688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1181/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:42:41.389045   88688 ssh_runner.go:195] Run: ls
	I1206 08:42:41.393186   88688 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 08:42:41.397320   88688 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 08:42:41.397342   88688 status.go:463] ha-613409-m03 apiserver status = Running (err=<nil>)
	I1206 08:42:41.397349   88688 status.go:176] ha-613409-m03 status: &{Name:ha-613409-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:42:41.397381   88688 status.go:174] checking status of ha-613409-m04 ...
	I1206 08:42:41.397609   88688 cli_runner.go:164] Run: docker container inspect ha-613409-m04 --format={{.State.Status}}
	I1206 08:42:41.415139   88688 status.go:371] ha-613409-m04 host status = "Running" (err=<nil>)
	I1206 08:42:41.415160   88688 host.go:66] Checking if "ha-613409-m04" exists ...
	I1206 08:42:41.415387   88688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-613409-m04
	I1206 08:42:41.437058   88688 host.go:66] Checking if "ha-613409-m04" exists ...
	I1206 08:42:41.437422   88688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:42:41.437487   88688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-613409-m04
	I1206 08:42:41.457271   88688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/ha-613409-m04/id_rsa Username:docker}
	I1206 08:42:41.549958   88688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:42:41.564253   88688 status.go:176] ha-613409-m04 status: &{Name:ha-613409-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.84s)
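
With m02 stopped, minikube status exits with code 7 and prints per-node blocks containing "host: Stopped", as shown above. A small sketch of scanning that text output for stopped hosts, assuming minikube is on PATH; the exit-code handling reflects what this run shows (7 when a node is down), not a complete mapping of status exit codes.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "ha-613409", "status").Output()
	if ee, ok := err.(*exec.ExitError); ok {
		// In this run, exit code 7 means at least one node is not running;
		// stdout is still populated, so keep scanning it.
		fmt.Println("status exit code:", ee.ExitCode())
	} else if err != nil {
		log.Fatal(err)
	}

	node := ""
	for _, raw := range strings.Split(string(out), "\n") {
		line := strings.TrimSpace(raw)
		if line == "" {
			continue
		}
		if !strings.Contains(line, ":") {
			node = line // block headers such as "ha-613409-m02" carry no colon
			continue
		}
		if strings.HasPrefix(line, "host:") && strings.HasSuffix(line, "Stopped") {
			fmt.Println("stopped node:", node)
		}
	}
}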

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 node start m02 --alsologtostderr -v 5: (13.782229952s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.70s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 stop --alsologtostderr -v 5
E1206 08:43:01.704594    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:01.710910    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:01.722207    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:01.743568    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:01.785004    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:01.866433    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:02.027948    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:02.349601    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:02.991797    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:04.273131    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:06.835303    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:11.957246    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:22.198689    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:27.073793    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 stop --alsologtostderr -v 5: (42.755110838s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 start --wait true --alsologtostderr -v 5
E1206 08:43:42.680621    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:44:23.642782    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:45:16.552492    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:45:43.215645    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:45:45.564759    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:46:10.915164    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 start --wait true --alsologtostderr -v 5: (2m39.958167974s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.84s)
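
This test records node list before the stop and again after the restart, and passes because the listings match. A rough sketch of that comparison under the assumption that minikube is on PATH and the profile is the one from this run; note it really does stop and restart the cluster, so it is illustrative only.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

const profile = "ha-613409"

// nodeList returns the raw `minikube node list` output for the profile.
func nodeList() string {
	out, err := exec.Command("minikube", "-p", profile, "node", "list").Output()
	if err != nil {
		log.Fatal(err)
	}
	return string(out)
}

// run executes a minikube subcommand against the profile and aborts on failure.
func run(args ...string) {
	cmd := exec.Command("minikube", append([]string{"-p", profile}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v: %s", args, err, out)
	}
}

func main() {
	before := nodeList()
	run("stop")
	run("start", "--wait", "true")
	if after := nodeList(); after != before {
		log.Fatalf("node list changed across restart:\nbefore:\n%s\nafter:\n%s", before, after)
	}
	fmt.Println("restart kept all nodes")
}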

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 node delete m03 --alsologtostderr -v 5: (9.263209839s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.06s)
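
The go-template query above prints one Ready-condition status per node, and the test expects every value to be True. A minimal sketch of that check, assuming kubectl is on PATH; the template string is the one from the command above (the outer single quotes in the log are shell quoting), everything else is illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// Template from the command above; exec passes the argument verbatim, so the
// shell-level single quotes seen in the log are dropped here.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+readyTmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	ready := 0
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		if line != "True" {
			log.Fatalf("node not Ready: %q", line)
		}
		ready++
	}
	fmt.Printf("%d nodes Ready\n", ready)
}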

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (48.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 stop --alsologtostderr -v 5: (47.935638946s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5: exit status 7 (116.065852ms)

                                                
                                                
-- stdout --
	ha-613409
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-613409-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-613409-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:47:19.446435  103581 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:47:19.446654  103581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:19.446662  103581 out.go:374] Setting ErrFile to fd 2...
	I1206 08:47:19.446666  103581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:19.446854  103581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:47:19.447024  103581 out.go:368] Setting JSON to false
	I1206 08:47:19.447045  103581 mustload.go:66] Loading cluster: ha-613409
	I1206 08:47:19.447211  103581 notify.go:221] Checking for updates...
	I1206 08:47:19.447404  103581 config.go:182] Loaded profile config "ha-613409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:47:19.447417  103581 status.go:174] checking status of ha-613409 ...
	I1206 08:47:19.447864  103581 cli_runner.go:164] Run: docker container inspect ha-613409 --format={{.State.Status}}
	I1206 08:47:19.466923  103581 status.go:371] ha-613409 host status = "Stopped" (err=<nil>)
	I1206 08:47:19.466941  103581 status.go:384] host is not running, skipping remaining checks
	I1206 08:47:19.466947  103581 status.go:176] ha-613409 status: &{Name:ha-613409 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:47:19.467002  103581 status.go:174] checking status of ha-613409-m02 ...
	I1206 08:47:19.467292  103581 cli_runner.go:164] Run: docker container inspect ha-613409-m02 --format={{.State.Status}}
	I1206 08:47:19.486191  103581 status.go:371] ha-613409-m02 host status = "Stopped" (err=<nil>)
	I1206 08:47:19.486212  103581 status.go:384] host is not running, skipping remaining checks
	I1206 08:47:19.486218  103581 status.go:176] ha-613409-m02 status: &{Name:ha-613409-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:47:19.486235  103581 status.go:174] checking status of ha-613409-m04 ...
	I1206 08:47:19.486513  103581 cli_runner.go:164] Run: docker container inspect ha-613409-m04 --format={{.State.Status}}
	I1206 08:47:19.504420  103581 status.go:371] ha-613409-m04 host status = "Stopped" (err=<nil>)
	I1206 08:47:19.504440  103581 status.go:384] host is not running, skipping remaining checks
	I1206 08:47:19.504445  103581 status.go:176] ha-613409-m04 status: &{Name:ha-613409-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (48.05s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (56.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1206 08:48:01.705374    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.377392746s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.17s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (56.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 node add --control-plane --alsologtostderr -v 5
E1206 08:48:29.406323    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-613409 node add --control-plane --alsologtostderr -v 5: (56.101237145s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-613409 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (56.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
TestJSONOutput/start/Command (37.37s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-632983 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-632983 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.373861545s)
--- PASS: TestJSONOutput/start/Command (37.37s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
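
The DistinctCurrentSteps and IncreasingCurrentSteps subtests assert properties of the step events that start --output=json emits: each line is a JSON object whose data.currentstep field numbers the step (the exact shape is visible in the TestErrorJSONOutput output further down). A small sketch of the strictly-increasing check over a saved event stream; the start-events.json file name is hypothetical.

package main

import (
	"bufio"
	"encoding/json"
	"log"
	"os"
	"strconv"
)

// event mirrors only the fields this check needs from the JSON lines shown
// later in this report (type plus the string-valued data map).
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	f, err := os.Open("start-events.json") // hypothetical capture of the --output=json stream
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	last := -1
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-step lines
		}
		step, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			log.Fatalf("bad currentstep %q", ev.Data["currentstep"])
		}
		if step <= last {
			log.Fatalf("currentstep went from %d to %d", last, step)
		}
		last = step
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	log.Println("currentstep values strictly increasing (hence also distinct)")
}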

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.99s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-632983 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-632983 --output=json --user=testUser: (7.98986381s)
--- PASS: TestJSONOutput/stop/Command (7.99s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-806429 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-806429 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.448091ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fa5b4211-bf0d-4ba1-a251-4e04d51e57b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-806429] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"18e03dc7-edb5-443e-a397-05934f2c30d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22049"}}
	{"specversion":"1.0","id":"5bbc8dcd-67ba-4c02-96ca-0fb6c74e590b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"58131c35-ce00-4e4b-8dc6-1e7a71fae01f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig"}}
	{"specversion":"1.0","id":"fa9cda30-c124-4ea3-8a5e-75dedaebad70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube"}}
	{"specversion":"1.0","id":"54c54d6f-0ce0-4c95-94f3-e60956142af7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"faa05a65-5cf0-4db7-a709-06117cd52dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"db81a6e8-1a6b-427d-bfbf-1fc968d815f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-806429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-806429
--- PASS: TestErrorJSONOutput (0.24s)
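
With --output=json the failure above surfaces as a final event of type io.k8s.sigs.minikube.error carrying exitcode, name, and message fields, as the stdout block shows. A minimal sketch that scans such a stream and reports the error event; the field names come from the output above, and reading from stdin is just for illustration.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Reads the JSON lines produced by `minikube start --output=json` from stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("start failed: %s (name=%s, exitcode=%s)\n",
				ev.Data["message"], ev.Data["name"], ev.Data["exitcode"])
			os.Exit(1)
		}
	}
}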

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.74s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-913743 --network=
E1206 08:50:16.550632    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:50:43.215194    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-913743 --network=: (27.588696266s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-913743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-913743
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-913743: (2.12674047s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.74s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (21.67s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-878590 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-878590 --network=bridge: (19.64680321s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-878590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-878590
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-878590: (2.003700333s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.67s)

                                                
                                    
TestKicExistingNetwork (26.08s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1206 08:51:07.792294    9158 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1206 08:51:07.809584    9158 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1206 08:51:07.809652    9158 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1206 08:51:07.809685    9158 cli_runner.go:164] Run: docker network inspect existing-network
W1206 08:51:07.826612    9158 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1206 08:51:07.826660    9158 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1206 08:51:07.826680    9158 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1206 08:51:07.826833    9158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 08:51:07.844952    9158 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9cbe8712784d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:e7:96:d9:b6:56} reservation:<nil>}
I1206 08:51:07.845437    9158 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b56e50}
I1206 08:51:07.845474    9158 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1206 08:51:07.845523    9158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1206 08:51:07.893890    9158 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-911484 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-911484 --network=existing-network: (23.923346923s)
helpers_test.go:175: Cleaning up "existing-network-911484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-911484
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-911484: (2.019522927s)
I1206 08:51:33.854320    9158 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.08s)
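
The setup above probes for a free private /24 (192.168.49.0/24 is already taken by the minikube bridge, so 192.168.58.0/24 is chosen) and then creates the network with the docker network create flags shown in the log. A rough sketch of that create step, assuming docker is on PATH; the candidate subnet list and the createNetwork helper are illustrative, not minikube's network_create.go logic, which the log only hints at.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// createNetwork creates a bridge network with the subnet/gateway/MTU flags
// used in the log above.
func createNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	// Candidate subnets: the log shows 192.168.49.0/24 skipped as taken and
	// 192.168.58.0/24 used, so try those (and a couple more guesses) in order
	// and stop at the first one docker accepts; docker rejects overlaps.
	for _, third := range []int{49, 58, 67, 76} {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		if err := createNetwork("existing-network", subnet, gateway); err != nil {
			if strings.Contains(err.Error(), "overlap") {
				continue // subnet already in use, try the next candidate
			}
			log.Fatal(err)
		}
		fmt.Println("created existing-network on", subnet)
		return
	}
	log.Fatal("no free subnet found")
}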

                                                
                                    
TestKicCustomSubnet (28.12s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-376661 --subnet=192.168.60.0/24
E1206 08:51:39.626173    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-376661 --subnet=192.168.60.0/24: (25.938259664s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-376661 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-376661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-376661
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-376661: (2.159891072s)
--- PASS: TestKicCustomSubnet (28.12s)

                                                
                                    
TestKicStaticIP (26.31s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-850043 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-850043 --static-ip=192.168.200.200: (24.03947202s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-850043 ip
helpers_test.go:175: Cleaning up "static-ip-850043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-850043
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-850043: (2.119577801s)
--- PASS: TestKicStaticIP (26.31s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (43.79s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-809574 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-809574 --driver=docker  --container-runtime=crio: (18.719321967s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-812419 --driver=docker  --container-runtime=crio
E1206 08:53:01.709272    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-812419 --driver=docker  --container-runtime=crio: (19.191144552s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-809574
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-812419
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-812419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-812419
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-812419: (2.331170252s)
helpers_test.go:175: Cleaning up "first-809574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-809574
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-809574: (2.343564288s)
--- PASS: TestMinikubeProfile (43.79s)
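Note: the `profile list -ojson` calls above are what the test parses to confirm both profiles are registered. A sketch of consuming that output, assuming the JSON keeps its current shape (profiles grouped under "valid"/"invalid", each with a "Name" field); the struct below models only those fields:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just the fields this sketch needs from
// `minikube profile list -o json`.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	for _, p := range pl.Invalid {
		fmt.Println("invalid profile:", p.Name)
	}
}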

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.69s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-496477 --memory=3072 --mount-string /tmp/TestMountStartserial3601424740/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-496477 --memory=3072 --mount-string /tmp/TestMountStartserial3601424740/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.692865255s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.69s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-496477 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
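Note: the verification step is just "ls the mount point over ssh and make sure the host directory shows through". A sketch of that round trip, mirroring the --mount-string and --no-kubernetes flags from the log; the "mount-demo" profile, the temp directory, and marker.txt are placeholders:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const profile = "mount-demo" // placeholder profile name

	// Prepare a host directory with a marker file we expect to see in the guest.
	hostDir, _ := os.MkdirTemp("", "minikube-mount")
	os.WriteFile(filepath.Join(hostDir, "marker.txt"), []byte("hello"), 0o644)

	// Start with the host directory mounted at /minikube-host, as in the log.
	exec.Command("minikube", "start", "-p", profile, "--no-kubernetes",
		"--mount-string", hostDir+":/minikube-host").Run()
	defer exec.Command("minikube", "delete", "-p", profile).Run()

	// Same check as mount_start_test.go:134: list the mount point via ssh.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "marker.txt") {
		fmt.Println("host directory is visible inside the node")
		return
	}
	fmt.Println("marker file not found; mount did not come up")
}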

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-507337 --memory=3072 --mount-string /tmp/TestMountStartserial3601424740/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-507337 --memory=3072 --mount-string /tmp/TestMountStartserial3601424740/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.000447316s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.00s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507337 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-496477 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-496477 --alsologtostderr -v=5: (1.67590083s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507337 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-507337
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-507337: (1.262258334s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.16s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-507337
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-507337: (6.15502848s)
--- PASS: TestMountStart/serial/RestartStopped (7.16s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507337 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (95.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-688539 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m35.038226282s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.52s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-688539 -- rollout status deployment/busybox: (2.289111396s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-956gj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-wlfxc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-956gj -- nslookup kubernetes.default
E1206 08:55:16.550442    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-wlfxc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-956gj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-wlfxc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.70s)
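Note: the deployment check boils down to rolling out a two-replica busybox Deployment, confirming the pods come up with distinct pod IPs (one per node), and confirming in-cluster DNS resolves from both. A sketch of the IP-distinctness part, assuming kubectl already points at the multinode cluster; it uses the same jsonpath as multinode_test.go:505:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One space-separated IP per pod, exactly as the test queries it.
	out, err := exec.Command("kubectl", "get", "pods", "-o",
		"jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		panic(err)
	}
	ips := strings.Fields(string(out))
	seen := map[string]bool{}
	for _, ip := range ips {
		if seen[ip] {
			fmt.Println("duplicate pod IP, pods may have landed on one node:", ip)
			return
		}
		seen[ip] = true
	}
	fmt.Printf("%d pods, all with distinct IPs: %v\n", len(ips), ips)
}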

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-956gj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-956gj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-wlfxc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-wlfxc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)
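Note: the shell pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) extracts the resolved address from line 5, field 3 of busybox's nslookup output, which the test then pings from inside each pod. A sketch of the same two steps driven from Go; the "busybox-demo" pod name is a placeholder (the test discovers real pod names via a jsonpath query):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const pod = "busybox-demo" // placeholder pod name

	// Resolve the host address inside the pod, as multinode_test.go:572 does.
	lookup := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Then prove the pod can reach the host, as multinode_test.go:583 does.
	ping := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
	if err := ping.Run(); err != nil {
		fmt.Println("ping from pod failed:", err)
		return
	}
	fmt.Println("pod can ping the host gateway")
}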

                                                
                                    
TestMultiNode/serial/AddNode (23.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-688539 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-688539 -v=5 --alsologtostderr: (22.512605026s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.16s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-688539 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp testdata/cp-test.txt multinode-688539:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2457789354/001/cp-test_multinode-688539.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test.txt"
E1206 08:55:43.215416    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539:/home/docker/cp-test.txt multinode-688539-m02:/home/docker/cp-test_multinode-688539_multinode-688539-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test_multinode-688539_multinode-688539-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539:/home/docker/cp-test.txt multinode-688539-m03:/home/docker/cp-test_multinode-688539_multinode-688539-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test_multinode-688539_multinode-688539-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp testdata/cp-test.txt multinode-688539-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2457789354/001/cp-test_multinode-688539-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m02:/home/docker/cp-test.txt multinode-688539:/home/docker/cp-test_multinode-688539-m02_multinode-688539.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test_multinode-688539-m02_multinode-688539.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m02:/home/docker/cp-test.txt multinode-688539-m03:/home/docker/cp-test_multinode-688539-m02_multinode-688539-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test_multinode-688539-m02_multinode-688539-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp testdata/cp-test.txt multinode-688539-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2457789354/001/cp-test_multinode-688539-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m03:/home/docker/cp-test.txt multinode-688539:/home/docker/cp-test_multinode-688539-m03_multinode-688539.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test_multinode-688539-m03_multinode-688539.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m03:/home/docker/cp-test.txt multinode-688539-m02:/home/docker/cp-test_multinode-688539-m03_multinode-688539-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test_multinode-688539-m03_multinode-688539-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.81s)
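Note: every `cp` above is checked the same way — copy a file to a node, then `ssh -n <node> "sudo cat ..."` it back and compare. A sketch of one host-to-node round trip; the "multinode-demo" profile and the temp file standing in for testdata/cp-test.txt are placeholders:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "multinode-demo" // placeholder profile name
	const node = profile + "-m02"    // secondary node, named as minikube names it

	// Local file to round-trip, standing in for testdata/cp-test.txt.
	payload := []byte("cp-test payload\n")
	src, _ := os.CreateTemp("", "cp-test")
	src.Write(payload)
	src.Close()

	// Copy host -> node, as helpers_test.go:573 does.
	if err := exec.Command("minikube", "-p", profile, "cp",
		src.Name(), node+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}

	// Read it back over ssh, as helpers_test.go:551 does, and compare bytes.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if bytes.Equal(out, payload) {
		fmt.Println("round trip matched")
		return
	}
	fmt.Println("content mismatch after cp")
}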

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-688539 node stop m03: (1.278024795s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-688539 status: exit status 7 (492.693467ms)

                                                
                                                
-- stdout --
	multinode-688539
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688539-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688539-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr: exit status 7 (495.224654ms)

                                                
                                                
-- stdout --
	multinode-688539
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688539-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688539-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:55:53.344294  163377 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:55:53.344568  163377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:55:53.344577  163377 out.go:374] Setting ErrFile to fd 2...
	I1206 08:55:53.344582  163377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:55:53.344840  163377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:55:53.345064  163377 out.go:368] Setting JSON to false
	I1206 08:55:53.345091  163377 mustload.go:66] Loading cluster: multinode-688539
	I1206 08:55:53.345179  163377 notify.go:221] Checking for updates...
	I1206 08:55:53.345503  163377 config.go:182] Loaded profile config "multinode-688539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:55:53.345520  163377 status.go:174] checking status of multinode-688539 ...
	I1206 08:55:53.345974  163377 cli_runner.go:164] Run: docker container inspect multinode-688539 --format={{.State.Status}}
	I1206 08:55:53.368454  163377 status.go:371] multinode-688539 host status = "Running" (err=<nil>)
	I1206 08:55:53.368495  163377 host.go:66] Checking if "multinode-688539" exists ...
	I1206 08:55:53.368785  163377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-688539
	I1206 08:55:53.386864  163377 host.go:66] Checking if "multinode-688539" exists ...
	I1206 08:55:53.387176  163377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:55:53.387251  163377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-688539
	I1206 08:55:53.405504  163377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/multinode-688539/id_rsa Username:docker}
	I1206 08:55:53.496483  163377 ssh_runner.go:195] Run: systemctl --version
	I1206 08:55:53.503410  163377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:55:53.516164  163377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:55:53.572374  163377 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-06 08:55:53.562206209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 08:55:53.572873  163377 kubeconfig.go:125] found "multinode-688539" server: "https://192.168.67.2:8443"
	I1206 08:55:53.572907  163377 api_server.go:166] Checking apiserver status ...
	I1206 08:55:53.572938  163377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:53.584745  163377 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	W1206 08:55:53.593391  163377 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:55:53.593445  163377 ssh_runner.go:195] Run: ls
	I1206 08:55:53.597285  163377 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1206 08:55:53.602274  163377 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1206 08:55:53.602305  163377 status.go:463] multinode-688539 apiserver status = Running (err=<nil>)
	I1206 08:55:53.602317  163377 status.go:176] multinode-688539 status: &{Name:multinode-688539 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:55:53.602334  163377 status.go:174] checking status of multinode-688539-m02 ...
	I1206 08:55:53.602623  163377 cli_runner.go:164] Run: docker container inspect multinode-688539-m02 --format={{.State.Status}}
	I1206 08:55:53.620774  163377 status.go:371] multinode-688539-m02 host status = "Running" (err=<nil>)
	I1206 08:55:53.620797  163377 host.go:66] Checking if "multinode-688539-m02" exists ...
	I1206 08:55:53.621078  163377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-688539-m02
	I1206 08:55:53.640145  163377 host.go:66] Checking if "multinode-688539-m02" exists ...
	I1206 08:55:53.640391  163377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:55:53.640437  163377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-688539-m02
	I1206 08:55:53.657907  163377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22049-5617/.minikube/machines/multinode-688539-m02/id_rsa Username:docker}
	I1206 08:55:53.749259  163377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:55:53.761645  163377 status.go:176] multinode-688539-m02 status: &{Name:multinode-688539-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:55:53.761680  163377 status.go:174] checking status of multinode-688539-m03 ...
	I1206 08:55:53.761915  163377 cli_runner.go:164] Run: docker container inspect multinode-688539-m03 --format={{.State.Status}}
	I1206 08:55:53.780107  163377 status.go:371] multinode-688539-m03 host status = "Stopped" (err=<nil>)
	I1206 08:55:53.780132  163377 status.go:384] host is not running, skipping remaining checks
	I1206 08:55:53.780144  163377 status.go:176] multinode-688539-m03 status: &{Name:multinode-688539-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
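Note: `minikube status` deliberately exits with status 7 when any node is stopped, so the non-zero exits above are expected rather than failures. A sketch of handling that exit code from Go; the "multinode-demo" profile name is a placeholder:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	const profile = "multinode-demo" // placeholder profile name

	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7 just means at least one host/kubelet is stopped;
		// the status text above still lists every node, as in the log.
		fmt.Println("some node is stopped (expected after `node stop`)")
	default:
		fmt.Println("status failed unexpectedly:", err)
	}
}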

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-688539 node start m03 -v=5 --alsologtostderr: (6.426701309s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.12s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-688539
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-688539
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-688539: (29.690251738s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539 --wait=true -v=5 --alsologtostderr
E1206 08:57:06.277883    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-688539 --wait=true -v=5 --alsologtostderr: (48.811014717s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-688539
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.63s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-688539 node delete m03: (4.657733589s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.26s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-688539 stop: (28.272961707s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-688539 status: exit status 7 (100.508071ms)

                                                
                                                
-- stdout --
	multinode-688539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688539-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr: exit status 7 (97.925307ms)

                                                
                                                
-- stdout --
	multinode-688539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688539-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:57:53.220042  173184 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:57:53.220146  173184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:57:53.220154  173184 out.go:374] Setting ErrFile to fd 2...
	I1206 08:57:53.220158  173184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:57:53.220333  173184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 08:57:53.220499  173184 out.go:368] Setting JSON to false
	I1206 08:57:53.220523  173184 mustload.go:66] Loading cluster: multinode-688539
	I1206 08:57:53.220574  173184 notify.go:221] Checking for updates...
	I1206 08:57:53.220821  173184 config.go:182] Loaded profile config "multinode-688539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:57:53.220834  173184 status.go:174] checking status of multinode-688539 ...
	I1206 08:57:53.221246  173184 cli_runner.go:164] Run: docker container inspect multinode-688539 --format={{.State.Status}}
	I1206 08:57:53.241160  173184 status.go:371] multinode-688539 host status = "Stopped" (err=<nil>)
	I1206 08:57:53.241205  173184 status.go:384] host is not running, skipping remaining checks
	I1206 08:57:53.241219  173184 status.go:176] multinode-688539 status: &{Name:multinode-688539 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:57:53.241248  173184 status.go:174] checking status of multinode-688539-m02 ...
	I1206 08:57:53.241536  173184 cli_runner.go:164] Run: docker container inspect multinode-688539-m02 --format={{.State.Status}}
	I1206 08:57:53.260167  173184 status.go:371] multinode-688539-m02 host status = "Stopped" (err=<nil>)
	I1206 08:57:53.260195  173184 status.go:384] host is not running, skipping remaining checks
	I1206 08:57:53.260203  173184 status.go:176] multinode-688539-m02 status: &{Name:multinode-688539-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.47s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1206 08:58:01.705616    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-688539 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.193282747s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.79s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-688539
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-688539-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.88541ms)

                                                
                                                
-- stdout --
	* [multinode-688539-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-688539-m02' is duplicated with machine name 'multinode-688539-m02' in profile 'multinode-688539'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-688539-m03 --driver=docker  --container-runtime=crio: (21.514483125s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-688539
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-688539: exit status 80 (291.763237ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-688539 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-688539-m03 already exists in multinode-688539-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-688539-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-688539-m03: (2.324896036s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.27s)

                                                
                                    
TestPreload (98.16s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-790735 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1206 08:59:24.768614    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-790735 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (44.843745026s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-790735 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-790735 image pull gcr.io/k8s-minikube/busybox: (1.617532939s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-790735
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-790735: (6.23536987s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-790735 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1206 09:00:16.550893    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:00:43.214575    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-790735 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (42.842971388s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-790735 image list
helpers_test.go:175: Cleaning up "test-preload-790735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-790735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-790735: (2.39000677s)
--- PASS: TestPreload (98.16s)
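Note: the preload scenario above is a four-step flow — start with --preload=false, pull an extra image, stop, restart with --preload=true — and the assertion is that the pulled image is still present in `image list` after the preloaded restart. A sketch of that flow; the "preload-demo" profile name is a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes minikube with the given arguments and returns its combined output.
func run(args ...string) string {
	out, _ := exec.Command("minikube", args...).CombinedOutput()
	return string(out)
}

func main() {
	const profile = "preload-demo" // placeholder profile name

	// Start without a preloaded tarball, pull an extra image, then stop.
	run("start", "-p", profile, "--preload=false")
	run("-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", profile)

	// Restart with preload enabled; the previously pulled image must survive.
	run("start", "-p", profile, "--preload=true")
	images := run("-p", profile, "image", "list")
	if strings.Contains(images, "busybox") {
		fmt.Println("busybox image survived the preload restart")
	} else {
		fmt.Println("busybox image missing after restart")
	}
	run("delete", "-p", profile)
}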

                                                
                                    
TestScheduledStopUnix (99.54s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-735357 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-735357 --memory=3072 --driver=docker  --container-runtime=crio: (23.059430445s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-735357 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 09:01:14.835818  190250 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:01:14.836178  190250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:01:14.836191  190250 out.go:374] Setting ErrFile to fd 2...
	I1206 09:01:14.836195  190250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:01:14.836427  190250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:01:14.836676  190250 out.go:368] Setting JSON to false
	I1206 09:01:14.836766  190250 mustload.go:66] Loading cluster: scheduled-stop-735357
	I1206 09:01:14.837127  190250 config.go:182] Loaded profile config "scheduled-stop-735357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:01:14.837226  190250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/config.json ...
	I1206 09:01:14.837426  190250 mustload.go:66] Loading cluster: scheduled-stop-735357
	I1206 09:01:14.837525  190250 config.go:182] Loaded profile config "scheduled-stop-735357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-735357 -n scheduled-stop-735357
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 09:01:15.223787  190401 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:01:15.224062  190401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:01:15.224072  190401 out.go:374] Setting ErrFile to fd 2...
	I1206 09:01:15.224077  190401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:01:15.224319  190401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:01:15.224561  190401 out.go:368] Setting JSON to false
	I1206 09:01:15.224745  190401 daemonize_unix.go:73] killing process 190285 as it is an old scheduled stop
	I1206 09:01:15.224850  190401 mustload.go:66] Loading cluster: scheduled-stop-735357
	I1206 09:01:15.225206  190401 config.go:182] Loaded profile config "scheduled-stop-735357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:01:15.225280  190401 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/config.json ...
	I1206 09:01:15.225466  190401 mustload.go:66] Loading cluster: scheduled-stop-735357
	I1206 09:01:15.225564  190401 config.go:182] Loaded profile config "scheduled-stop-735357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1206 09:01:15.230477    9158 retry.go:31] will retry after 57.488µs: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.231651    9158 retry.go:31] will retry after 219.805µs: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.232790    9158 retry.go:31] will retry after 332.336µs: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.233934    9158 retry.go:31] will retry after 205.599µs: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.235077    9158 retry.go:31] will retry after 620.656µs: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.236210    9158 retry.go:31] will retry after 434.611µs: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.237336    9158 retry.go:31] will retry after 1.401784ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.239536    9158 retry.go:31] will retry after 2.29571ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.242763    9158 retry.go:31] will retry after 2.984128ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.245952    9158 retry.go:31] will retry after 5.412271ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.252165    9158 retry.go:31] will retry after 4.118058ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.257388    9158 retry.go:31] will retry after 8.764704ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.266662    9158 retry.go:31] will retry after 9.292959ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.276967    9158 retry.go:31] will retry after 23.410629ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.301308    9158 retry.go:31] will retry after 24.653423ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
I1206 09:01:15.326609    9158 retry.go:31] will retry after 53.78771ms: open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-735357 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-735357 -n scheduled-stop-735357
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-735357
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-735357 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 09:01:41.134830  190959 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:01:41.135180  190959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:01:41.135191  190959 out.go:374] Setting ErrFile to fd 2...
	I1206 09:01:41.135196  190959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:01:41.135401  190959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:01:41.135622  190959 out.go:368] Setting JSON to false
	I1206 09:01:41.135693  190959 mustload.go:66] Loading cluster: scheduled-stop-735357
	I1206 09:01:41.136036  190959 config.go:182] Loaded profile config "scheduled-stop-735357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:01:41.136104  190959 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/scheduled-stop-735357/config.json ...
	I1206 09:01:41.136298  190959 mustload.go:66] Loading cluster: scheduled-stop-735357
	I1206 09:01:41.136388  190959 config.go:182] Loaded profile config "scheduled-stop-735357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-735357
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-735357: exit status 7 (79.429157ms)

                                                
                                                
-- stdout --
	scheduled-stop-735357
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-735357 -n scheduled-stop-735357
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-735357 -n scheduled-stop-735357: exit status 7 (78.15045ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-735357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-735357
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-735357: (4.954869416s)
--- PASS: TestScheduledStopUnix (99.54s)
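Note: the burst of retry.go lines above is the test polling for the scheduled-stop pid file with a short exponential backoff before asserting on it. A sketch of that polling pattern; the /tmp path below is a placeholder (the real test watches the profile's pid file) and the timings are illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the deadline passes, doubling the
// delay each attempt, roughly like the retry.go backoff in the log above.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 50 * time.Microsecond
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// Placeholder path standing in for <profile dir>/pid.
	if err := waitForFile("/tmp/scheduled-stop-demo/pid", 2*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("scheduled-stop pid file appeared")
}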

                                                
                                    
TestInsufficientStorage (8.81s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-423078 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-423078 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.349802783s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"40712510-73ff-46f7-b470-0352373b5273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-423078] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d0ce29a-ab97-4ce0-9d9c-8e0f18a6c85a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22049"}}
	{"specversion":"1.0","id":"b2d9b45c-c5bb-47ed-bd2e-754f94c27346","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"28e23407-4071-49ff-a62f-4f66f58342d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig"}}
	{"specversion":"1.0","id":"90363066-821d-4476-abf9-97b7937e54cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube"}}
	{"specversion":"1.0","id":"7ec9f678-1696-47eb-8573-60a70faf7a41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6069fa04-8b2d-4ac9-a2a4-55e717e6c1e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d809aacb-73d5-4f0a-bd6f-8872df0654bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2b508d74-b7e3-46f9-ac6b-2f1494246918","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ee4e0b8d-a34a-4185-9eaa-930a305ef5b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"38c2c325-c55f-4da9-bb73-3071c6109a0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"27c3a2b9-47b2-4f85-92fe-1ace96246b35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-423078\" primary control-plane node in \"insufficient-storage-423078\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8472d873-b0ad-4f03-89c7-6e2256ff974f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2cecd0b8-753a-4222-8795-30f025480da9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ba8c95d-bdcb-4dd6-b34c-9509788d2acd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-423078 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-423078 --output=json --layout=cluster: exit status 7 (284.913415ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-423078","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-423078","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 09:02:37.878244  193461 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-423078" does not appear in /home/jenkins/minikube-integration/22049-5617/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-423078 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-423078 --output=json --layout=cluster: exit status 7 (282.111784ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-423078","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-423078","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 09:02:38.161353  193573 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-423078" does not appear in /home/jenkins/minikube-integration/22049-5617/kubeconfig
	E1206 09:02:38.171651  193573 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/insufficient-storage-423078/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-423078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-423078
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-423078: (1.893442849s)
--- PASS: TestInsufficientStorage (8.81s)
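This scenario can be approximated outside the harness. A minimal sketch, assuming minikube stands in for out/minikube-linux-amd64, that the profile name insufficient-storage-demo is purely illustrative, and that the two MINIKUBE_TEST_* variables behave as they do in this run (they appear in the JSON stream above and make the preflight storage check see a nearly full /var):

    # Test-only knobs seen in the JSON stream above.
    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19

    # Expect exit status 26 (RSRC_DOCKER_STORAGE) rather than a running cluster.
    minikube start -p insufficient-storage-demo --memory=3072 --output=json --wait=true \
      --driver=docker --container-runtime=crio
    echo "start exit code: $?"

    # Cluster-layout status should then report StatusCode 507 / InsufficientStorage.
    minikube status -p insufficient-storage-demo --output=json --layout=cluster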

                                                
                                    
TestRunningBinaryUpgrade (43.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3075964946 start -p running-upgrade-405440 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1206 09:05:16.551193    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3075964946 start -p running-upgrade-405440 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.198911056s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-405440 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 09:05:43.215350    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-405440 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.670931955s)
helpers_test.go:175: Cleaning up "running-upgrade-405440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-405440
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-405440: (2.408682359s)
--- PASS: TestRunningBinaryUpgrade (43.74s)
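The upgrade-in-place flow exercised above can be sketched as follows, assuming minikube stands in for the current out/minikube-linux-amd64 build, /tmp/minikube-v1.35.0 stands in for the temp copy of the previous release the harness downloads, and running-upgrade-demo is an illustrative profile name:

    # 1. Create and start the profile with the previous minikube release.
    /tmp/minikube-v1.35.0 start -p running-upgrade-demo --memory=3072 \
      --vm-driver=docker --container-runtime=crio

    # 2. With the cluster still running, start the same profile with the newer binary;
    #    in this run the newer binary took over the existing cluster rather than recreating it.
    minikube start -p running-upgrade-demo --memory=3072 --alsologtostderr -v=1 \
      --driver=docker --container-runtime=crio

    # 3. Tear down.
    minikube delete -p running-upgrade-demo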

                                                
                                    
TestKubernetesUpgrade (298s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.244866564s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-702638
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-702638: (1.300811018s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-702638 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-702638 status --format={{.Host}}: exit status 7 (79.257689ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m21.174568063s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-702638 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (80.223378ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-702638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-702638
	    minikube start -p kubernetes-upgrade-702638 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7026382 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-702638 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-702638 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.567184565s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-702638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-702638
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-702638: (2.48751145s)
--- PASS: TestKubernetesUpgrade (298.00s)
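The version transitions above follow a fixed sequence: start old, stop, upgrade, then confirm that a downgrade is refused. A minimal sketch, assuming minikube stands in for out/minikube-linux-amd64 and k8s-upgrade-demo is an illustrative profile name; the Kubernetes versions are the ones used in this run:

    # Start on the old Kubernetes version, then stop the cluster.
    minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio
    minikube stop -p k8s-upgrade-demo

    # Upgrade the stopped cluster to the newer Kubernetes release.
    minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.35.0-beta.0 \
      --driver=docker --container-runtime=crio

    # A downgrade attempt is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED).
    minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio
    echo "downgrade exit code: $?"

The stderr captured above already spells out the supported ways back to the older version: delete and recreate the profile at v1.28.0, or start a second profile at that version.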

                                                
                                    
TestMissingContainerUpgrade (86.81s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2988416884 start -p missing-upgrade-173950 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2988416884 start -p missing-upgrade-173950 --memory=3072 --driver=docker  --container-runtime=crio: (47.051862958s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-173950
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-173950: (1.85656275s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-173950
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-173950 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-173950 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.985221656s)
helpers_test.go:175: Cleaning up "missing-upgrade-173950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-173950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-173950: (2.399895908s)
--- PASS: TestMissingContainerUpgrade (86.81s)
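The recovery path above can be sketched as follows, assuming minikube stands in for out/minikube-linux-amd64, /tmp/minikube-v1.35.0 stands in for the temp copy of the previous release, missing-upgrade-demo is an illustrative profile name, and the node container carries the profile name as it does in this run:

    # Provision the profile with the older release, then remove its node container
    # behind minikube's back.
    /tmp/minikube-v1.35.0 start -p missing-upgrade-demo --memory=3072 \
      --driver=docker --container-runtime=crio
    docker stop missing-upgrade-demo
    docker rm missing-upgrade-demo

    # Starting the profile with the current binary should recreate the missing
    # container from the stored profile config rather than failing.
    minikube start -p missing-upgrade-demo --memory=3072 --alsologtostderr -v=1 \
      --driver=docker --container-runtime=crio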

                                                
                                    
TestPause/serial/Start (46.22s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-845581 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-845581 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (46.220854241s)
--- PASS: TestPause/serial/Start (46.22s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-845581 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-845581 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.201169082s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (310.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.592325792 start -p stopped-upgrade-454433 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.592325792 start -p stopped-upgrade-454433 --memory=3072 --vm-driver=docker  --container-runtime=crio: (47.398593799s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.592325792 -p stopped-upgrade-454433 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.592325792 -p stopped-upgrade-454433 stop: (2.308860108s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-454433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-454433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.300497706s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (310.01s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-328079 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-328079 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (81.385309ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-328079] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
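The flag conflict checked above can be reproduced directly; a minimal sketch, assuming minikube stands in for out/minikube-linux-amd64 and no-k8s-demo is an illustrative profile name:

    # --no-kubernetes and --kubernetes-version are mutually exclusive; this exits
    # with status 14 (MK_USAGE) before doing any work.
    minikube start -p no-k8s-demo --no-kubernetes --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio
    echo "exit code: $?"

    # If the version came from a persisted global setting, clear it as the error message suggests.
    minikube config unset kubernetes-version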

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (20.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-328079 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-328079 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.020631676s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-328079 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (20.34s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (15.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-328079 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-328079 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.521925411s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-328079 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-328079 status -o json: exit status 2 (306.755219ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-328079","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-328079
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-328079: (1.984899312s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.81s)
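What this subtest checks can be sketched by hand, assuming minikube stands in for out/minikube-linux-amd64 and the NoKubernetes-328079 profile already exists with Kubernetes running, as it did at this point in the run:

    # Re-running start with --no-kubernetes on an existing profile leaves the node
    # running but with the Kubernetes components stopped.
    minikube start -p NoKubernetes-328079 --no-kubernetes --memory=3072 \
      --driver=docker --container-runtime=crio

    # status then reports Host=Running with Kubelet/APIServer=Stopped and exits 2,
    # which is how the test distinguishes "machine up, Kubernetes down".
    minikube -p NoKubernetes-328079 status -o json
    echo "status exit code: $?"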

                                                
                                    
TestNoKubernetes/serial/Start (4.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-328079 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-328079 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.097621215s)
--- PASS: TestNoKubernetes/serial/Start (4.10s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22049-5617/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-328079 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-328079 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.391184ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
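The kubelet check above uses an ssh probe into the node; a minimal sketch, assuming minikube stands in for out/minikube-linux-amd64 and the NoKubernetes-328079 profile is up without Kubernetes:

    # In a --no-kubernetes node the kubelet unit must not be active; is-active exits
    # non-zero (status 3 inside the node, surfaced as exit 1 by the ssh wrapper here).
    minikube ssh -p NoKubernetes-328079 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet is not running, as expected"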

                                                
                                    
TestNoKubernetes/serial/ProfileList (26.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (25.922965919s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (26.73s)

                                                
                                    
TestNetworkPlugins/group/false (3.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-646473 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-646473 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (172.1402ms)

                                                
                                                
-- stdout --
	* [false-646473] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:06:48.874803  247804 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:06:48.874924  247804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:06:48.874934  247804 out.go:374] Setting ErrFile to fd 2...
	I1206 09:06:48.874940  247804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:06:48.875226  247804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5617/.minikube/bin
	I1206 09:06:48.875790  247804 out.go:368] Setting JSON to false
	I1206 09:06:48.877184  247804 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2960,"bootTime":1765009049,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:06:48.877255  247804 start.go:143] virtualization: kvm guest
	I1206 09:06:48.880479  247804 out.go:179] * [false-646473] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:06:48.882545  247804 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:06:48.882545  247804 notify.go:221] Checking for updates...
	I1206 09:06:48.885272  247804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:06:48.886724  247804 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5617/kubeconfig
	I1206 09:06:48.888018  247804 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5617/.minikube
	I1206 09:06:48.889323  247804 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:06:48.890473  247804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:06:48.892166  247804 config.go:182] Loaded profile config "NoKubernetes-328079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1206 09:06:48.892264  247804 config.go:182] Loaded profile config "kubernetes-upgrade-702638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:06:48.892335  247804 config.go:182] Loaded profile config "stopped-upgrade-454433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 09:06:48.892417  247804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:06:48.918247  247804 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:06:48.918342  247804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:06:48.980720  247804 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-06 09:06:48.969857954 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:06:48.980814  247804 docker.go:319] overlay module found
	I1206 09:06:48.982760  247804 out.go:179] * Using the docker driver based on user configuration
	I1206 09:06:48.984180  247804 start.go:309] selected driver: docker
	I1206 09:06:48.984200  247804 start.go:927] validating driver "docker" against <nil>
	I1206 09:06:48.984214  247804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:06:48.986109  247804 out.go:203] 
	W1206 09:06:48.987385  247804 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1206 09:06:48.988557  247804 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-646473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-646473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:04:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-702638
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:04:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-454433
contexts:
- context:
    cluster: kubernetes-upgrade-702638
    user: kubernetes-upgrade-702638
  name: kubernetes-upgrade-702638
- context:
    cluster: stopped-upgrade-454433
    user: stopped-upgrade-454433
  name: stopped-upgrade-454433
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-702638
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.key
- name: stopped-upgrade-454433
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/stopped-upgrade-454433/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/stopped-upgrade-454433/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-646473

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646473"

                                                
                                                
----------------------- debugLogs end: false-646473 [took: 3.086124475s] --------------------------------
helpers_test.go:175: Cleaning up "false-646473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-646473
--- PASS: TestNetworkPlugins/group/false (3.42s)
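The rejection exercised above can be reproduced directly; a minimal sketch, assuming minikube stands in for out/minikube-linux-amd64 and false-cni-demo is an illustrative profile name:

    # cri-o provides no built-in pod networking, so minikube rejects --cni=false for
    # this runtime; the command exits with status 14 (MK_USAGE) before creating anything.
    minikube start -p false-cni-demo --memory=3072 --cni=false \
      --driver=docker --container-runtime=crio
    echo "exit code: $?"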

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.999616777s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.00s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-328079
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-328079: (2.655228485s)
--- PASS: TestNoKubernetes/serial/Stop (2.66s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-328079 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-328079 --driver=docker  --container-runtime=crio: (6.395363187s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.40s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-328079 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-328079 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.142151ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStartStop/group/no-preload/serial/FirstStart (45.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.918164265s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (45.92s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-322324 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b89bf94a-47a6-4b25-9cea-c82defe85ad0] Pending
helpers_test.go:352: "busybox" [b89bf94a-47a6-4b25-9cea-c82defe85ad0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b89bf94a-47a6-4b25-9cea-c82defe85ad0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003302116s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-322324 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)

TestStartStop/group/old-k8s-version/serial/Stop (16.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-322324 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-322324 --alsologtostderr -v=3: (16.177198622s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.18s)

TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-769733 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7c052e39-08e6-442e-970a-3e9534e4ea7b] Pending
helpers_test.go:352: "busybox" [7c052e39-08e6-442e-970a-3e9534e4ea7b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7c052e39-08e6-442e-970a-3e9534e4ea7b] Running
E1206 09:08:01.705546    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003692441s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-769733 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/Stop (18.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-769733 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-769733 --alsologtostderr -v=3: (18.106706766s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-322324 -n old-k8s-version-322324
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-322324 -n old-k8s-version-322324: exit status 7 (77.888306ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-322324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (45.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1206 09:08:19.628243    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/addons-765040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-322324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (45.325429296s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-322324 -n old-k8s-version-322324
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.68s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-769733 -n no-preload-769733
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-769733 -n no-preload-769733: exit status 7 (79.270028ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-769733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (27.02s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-769733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (26.631838613s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-769733 -n no-preload-769733
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (27.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-454433
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-454433: (1.04739721s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-nz2h8" [18d29a8a-32bc-4048-980e-47fc64356758] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00325144s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/FirstStart (43.76s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (43.760265272s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.76s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-62nsl" [c4ee64b5-d6b4-4ecd-babe-35539026efe9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003709082s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-nz2h8" [18d29a8a-32bc-4048-980e-47fc64356758] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003146667s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-769733 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-62nsl" [c4ee64b5-d6b4-4ecd-babe-35539026efe9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004867845s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-322324 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-769733 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (1m12.482048207s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.48s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-322324 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/FirstStart (24.63s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (24.625718365s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.63s)

TestNetworkPlugins/group/auto/Start (43.14s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.144257528s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/embed-certs/serial/DeployApp (7.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-931091 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0721acc9-3cc1-45bb-b49b-5ab87b43bb99] Pending
helpers_test.go:352: "busybox" [0721acc9-3cc1-45bb-b49b-5ab87b43bb99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0721acc9-3cc1-45bb-b49b-5ab87b43bb99] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003862935s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-931091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.26s)

TestStartStop/group/newest-cni/serial/Stop (2.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-718157 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-718157 --alsologtostderr -v=3: (2.43544584s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.44s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718157 -n newest-cni-718157
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718157 -n newest-cni-718157: exit status 7 (82.399836ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-718157 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (10.9s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-718157 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (10.577254897s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718157 -n newest-cni-718157
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.90s)

TestStartStop/group/embed-certs/serial/Stop (17.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-931091 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-931091 --alsologtostderr -v=3: (17.421239937s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.42s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-718157 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/kindnet/Start (42.58s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.58188066s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.58s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-646473 "pgrep -a kubelet"
I1206 09:10:04.161300    9158 config.go:182] Loaded profile config "auto-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-646473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-skzd5" [e4e255c8-cc00-4add-88fe-e5a0b6fefc05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-skzd5" [e4e255c8-cc00-4add-88fe-e5a0b6fefc05] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004844299s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931091 -n embed-certs-931091
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931091 -n embed-certs-931091: exit status 7 (104.086761ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-931091 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (50.62s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-931091 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (50.273456279s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931091 -n embed-certs-931091
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.62s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-646473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-213278 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [88118ce1-5ebb-4136-900b-1521d34ca0ce] Pending
helpers_test.go:352: "busybox" [88118ce1-5ebb-4136-900b-1521d34ca0ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [88118ce1-5ebb-4136-900b-1521d34ca0ce] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004506951s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-213278 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (19.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-213278 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-213278 --alsologtostderr -v=3: (19.546565316s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (19.55s)

TestNetworkPlugins/group/calico/Start (44.65s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1206 09:10:43.215113    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-012975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (44.649680826s)
--- PASS: TestNetworkPlugins/group/calico/Start (44.65s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-tcmgc" [7dcb582d-1578-499d-a922-334124bfb426] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007023221s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278: exit status 7 (83.333378ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-213278 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-213278 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (44.857509473s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-213278 -n default-k8s-diff-port-213278
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.21s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-646473 "pgrep -a kubelet"
I1206 09:10:52.724376    9158 config.go:182] Loaded profile config "kindnet-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-646473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-29bvb" [c5d1f630-6b26-41db-8944-61980b79e107] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-29bvb" [c5d1f630-6b26-41db-8944-61980b79e107] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005011125s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-68gdp" [ffe447cb-d35f-4f6c-9134-d914103a0745] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003355192s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-646473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-68gdp" [ffe447cb-d35f-4f6c-9134-d914103a0745] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004541661s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-931091 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-931091 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

TestNetworkPlugins/group/custom-flannel/Start (46.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (46.368616077s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (46.37s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-rnvrl" [dadc3577-7caa-45ce-8d41-e5188593e8bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003676095s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (61.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m1.61602797s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.62s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-646473 "pgrep -a kubelet"
I1206 09:11:26.737477    9158 config.go:182] Loaded profile config "calico-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-646473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t4fnh" [15ed4a91-474f-493b-82ef-ffbfe010e36e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t4fnh" [15ed4a91-474f-493b-82ef-ffbfe010e36e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00420746s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.20s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hjxhr" [e257799f-93c1-460e-8143-bc16fc0365fd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.009294068s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-646473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hjxhr" [e257799f-93c1-460e-8143-bc16fc0365fd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003926712s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-213278 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-213278 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/flannel/Start (51.03s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.028303179s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.03s)

TestNetworkPlugins/group/bridge/Start (64.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-646473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m4.439850448s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.44s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-646473 "pgrep -a kubelet"
I1206 09:12:06.407370    9158 config.go:182] Loaded profile config "custom-flannel-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-646473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g4zm8" [a1158ed6-103a-4673-a7f7-124ccfbefd3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g4zm8" [a1158ed6-103a-4673-a7f7-124ccfbefd3c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003983976s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-646473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-646473 "pgrep -a kubelet"
I1206 09:12:26.875466    9158 config.go:182] Loaded profile config "enable-default-cni-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-646473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v9d4w" [52d426d4-0fbc-419b-9602-51143274c648] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v9d4w" [52d426d4-0fbc-419b-9602-51143274c648] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003522655s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-646473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-2x9qr" [7255fdea-40d1-479a-ac3e-050ff37d8ae0] Running
E1206 09:12:50.160175    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/old-k8s-version-322324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004646414s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-646473 "pgrep -a kubelet"
I1206 09:12:56.285266    9158 config.go:182] Loaded profile config "flannel-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-646473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kthdc" [1d6b7957-c1a6-410f-9865-8a82eb036c94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 09:12:56.737044    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:56.743390    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:56.754731    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:56.776060    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:56.817486    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:56.898934    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:57.060609    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:57.382290    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:58.024234    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kthdc" [1d6b7957-c1a6-410f-9865-8a82eb036c94] Running
E1206 09:12:59.306278    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:13:01.705199    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/functional-479582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:13:01.867641    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004627927s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-646473 "pgrep -a kubelet"
I1206 09:13:03.749119    9158 config.go:182] Loaded profile config "bridge-646473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (7.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-646473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p5qk4" [5e65dc81-20e2-4741-b258-1b3f8bbb811c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p5qk4" [5e65dc81-20e2-4741-b258-1b3f8bbb811c] Running
E1206 09:13:06.989383    9158 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/no-preload-769733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 7.004397223s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (7.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-646473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-646473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-646473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    

Test skip (34/415)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
158 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
159 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
160 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
380 TestStartStop/group/disable-driver-mounts 0.18
384 TestNetworkPlugins/group/kubenet 3.36
392 TestNetworkPlugins/group/cilium 3.65
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-217626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-217626
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-646473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-646473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:04:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-702638
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:04:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-454433
contexts:
- context:
    cluster: kubernetes-upgrade-702638
    user: kubernetes-upgrade-702638
  name: kubernetes-upgrade-702638
- context:
    cluster: stopped-upgrade-454433
    user: stopped-upgrade-454433
  name: stopped-upgrade-454433
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-702638
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.key
- name: stopped-upgrade-454433
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/stopped-upgrade-454433/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/stopped-upgrade-454433/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-646473

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646473"

                                                
                                                
----------------------- debugLogs end: kubenet-646473 [took: 3.19564735s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-646473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-646473
--- SKIP: TestNetworkPlugins/group/kubenet (3.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-646473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-646473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:04:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-702638
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:04:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-454433
contexts:
- context:
    cluster: kubernetes-upgrade-702638
    user: kubernetes-upgrade-702638
  name: kubernetes-upgrade-702638
- context:
    cluster: stopped-upgrade-454433
    user: stopped-upgrade-454433
  name: stopped-upgrade-454433
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-702638
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/kubernetes-upgrade-702638/client.key
- name: stopped-upgrade-454433
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/stopped-upgrade-454433/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5617/.minikube/profiles/stopped-upgrade-454433/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-646473

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-646473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646473"

                                                
                                                
----------------------- debugLogs end: cilium-646473 [took: 3.488678495s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-646473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-646473
--- SKIP: TestNetworkPlugins/group/cilium (3.65s)

                                                
                                    